
    The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images

    The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. The code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and the development of next-generation cyber-infrastructure. Recently, it has begun to find applicability in the field of visualization. This has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client, and because it includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics that also offers value to visualization: a background rectification algorithm that reveals faint structure in an image, and tools for creating cutout and down-sampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials enable readers to reproduce and extend all the visualizations presented in this paper. Comment: 16 pages, 9 figures; accepted for publication in the PASP Special Focus Issue: Techniques and Methods for Astrophysical Data Visualization.
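
    As a rough illustration of the histogram-equalization idea behind mViewer's adaptive stretch, the sketch below applies a plain (non-adaptive) histogram-equalization stretch to a FITS image with numpy and astropy. The input file name is hypothetical and the routine is a generic stand-in, not Montage's own implementation.

        # Generic histogram-equalization stretch of a FITS image (illustrative only;
        # not Montage code). Assumes numpy and astropy are installed and that
        # "image.fits" is a hypothetical single-HDU image.
        import numpy as np
        from astropy.io import fits

        def hist_equalize(data, nbins=65536):
            """Map pixel values through the cumulative histogram so the stretched
            image spreads its values more evenly over the dynamic range."""
            finite = data[np.isfinite(data)]
            hist, edges = np.histogram(finite, bins=nbins)
            cdf = np.cumsum(hist).astype(float)
            cdf /= cdf[-1]                              # normalize to [0, 1]
            return np.interp(data, edges[:-1], cdf)     # same shape as the input

        with fits.open("image.fits") as hdul:           # hypothetical input file
            stretched = hist_equalize(hdul[0].data)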

    Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services

    One of the most widely implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status, or service quality of these online WMS resources. To fill this void, we investigated global WMS resources and performed distributed performance monitoring of these services. This paper explicates a distributed monitoring framework that was used to monitor 46,296 WMSs continuously for over one year, and a crawling method used to discover these WMSs. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers, and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of the basic operations (i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users, and developers of standards to grasp the status of global WMS resources, as well as to understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery and service performance evaluation, and guide service performance improvements. Comment: 24 pages; 15 figures.
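
    As a sketch of the kind of probe such monitoring relies on, the snippet below times one GetCapabilities and one GetMap request against a single WMS endpoint using the standard OGC WMS 1.3.0 key-value encoding; the endpoint URL and layer name are hypothetical, and this is a minimal stand-in for the paper's distributed framework.

        # Time a GetCapabilities and a GetMap request against one WMS endpoint.
        # Endpoint and layer name are hypothetical; parameters follow WMS 1.3.0.
        import time
        import requests

        ENDPOINT = "https://example.org/wms"            # hypothetical WMS endpoint

        def timed_get(params, timeout=30):
            start = time.monotonic()
            resp = requests.get(ENDPOINT, params=params, timeout=timeout)
            return resp.status_code, time.monotonic() - start

        # GetCapabilities advertises layers, formats, and supported versions.
        caps_status, caps_seconds = timed_get({
            "service": "WMS",
            "request": "GetCapabilities",
        })

        # GetMap renders one layer into a small PNG tile.
        map_status, map_seconds = timed_get({
            "service": "WMS",
            "version": "1.3.0",
            "request": "GetMap",
            "layers": "example_layer",                  # hypothetical layer name
            "styles": "",
            "crs": "EPSG:4326",
            "bbox": "-90,-180,90,180",                  # WMS 1.3.0 axis order for EPSG:4326
            "width": "256",
            "height": "256",
            "format": "image/png",
        })

        print(caps_status, caps_seconds, map_status, map_seconds)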

    NPC AI System Based on Gameplay Recordings

    A well-optimized Non-Player Character (NPC), as an opponent or a teammate, is a major part of multiplayer games. Most game bots are built upon a rigid system with a limited number of decisions and animations. Experienced players can distinguish bots from human players and can predict bot movements and strategies, which reduces the quality of the gameplay experience. Therefore, multiplayer game players favour playing against human players rather than NPCs. The VR game market and VR gamers are still a small fraction of the game industry, and multiplayer VR games suffer from loss of their player base when game owners cannot find other players to play with. This study demonstrates the applicability of an Artificial Intelligence (AI) system based on gameplay recordings for a Virtual Reality (VR) First-Person Shooter (FPS) game called Vrena. The subject game has an uncommon way of movement, in which the players use grappling hooks to navigate. To imitate VR players' movements and gestures, an AI system was developed that uses gameplay recordings as navigation data. The system comprises three major functionalities: gameplay recording, data refinement, and navigation. The game environment is sliced into cubic sectors to reduce the number of positional states, and gameplay is recorded by time intervals and actions. The produced game logs are segmented into log sections, which are used to create a look-up table. The look-up table is used for navigating the NPC agent, and the decision mechanism follows the state-action-reward concept. The success of the developed tool was evaluated via a survey, which provided substantial feedback for improving the system.
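
    The sector quantization and look-up table can be pictured with the short sketch below; the sector size, field names, and actions are illustrative assumptions rather than the thesis implementation.

        # Quantize recorded player positions into cubic sectors and build a
        # sector -> recorded-actions look-up table an NPC agent can query.
        from collections import defaultdict

        SECTOR_SIZE = 2.0   # edge length of a cubic sector, in world units (assumed)

        def sector_of(position):
            """Map a continuous (x, y, z) position to a discrete cubic-sector key."""
            x, y, z = position
            return (int(x // SECTOR_SIZE), int(y // SECTOR_SIZE), int(z // SECTOR_SIZE))

        def build_lookup(recordings):
            """recordings: iterable of (position, action) pairs from gameplay logs."""
            table = defaultdict(list)
            for position, action in recordings:
                table[sector_of(position)].append(action)
            return table

        # An agent in a given sector replays one of the actions recorded there.
        log = [((2.5, 0.3, 4.1), "hook_forward"), ((2.7, 0.1, 4.4), "shoot")]
        lookup = build_lookup(log)
        print(lookup[sector_of((2.6, 0.2, 4.2))])       # -> ['hook_forward', 'shoot']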

    From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text

    This paper presents a novel concept: a graphical representation of human emotion extracted from text sentences. The major contributions of this paper are the following. First, we present a pipeline that extracts, processes, and renders the emotions of a 3D virtual human (VH). The extraction of emotion is based on data-mining statistics from large cyberspace databases. Second, we propose methods to optimize this computational pipeline so that real-time virtual reality rendering can be achieved on common PCs. Third, we use the Poisson distribution to transfer database-extracted lexical and language parameters into coherent intensities of valence and arousal, the parameters of Russell's circumplex model of emotion. The last contribution is a practical color interpretation of emotion that influences the emotional aspect of rendered VHs. To test our method's efficiency, computational statistics related to classical and atypical cases of emotion are provided. To evaluate our approach, we applied our method to diverse areas such as cyberspace forums, comics, and theater dialogs.
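
    To make the Poisson-based mapping concrete, the toy sketch below turns a word's corpus count into a bounded intensity via a Poisson cumulative distribution and then picks a color from valence and arousal; the rate parameter, lexicon scores, and color rule are illustrative assumptions, not the paper's calibrated mapping.

        # Toy mapping from a lexical frequency to a bounded intensity and a color.
        from math import exp, factorial

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam), computed directly for small k."""
            return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

        def intensity(word_count, expected_count=5.0):
            """Rare words get a low intensity; frequent words saturate toward 1."""
            return poisson_cdf(word_count, expected_count)

        def emotion_color(valence, arousal):
            """Toy color rule: valence drives red vs. green, arousal dims blue."""
            red = int(255 * max(0.0, -valence))
            green = int(255 * max(0.0, valence))
            blue = int(255 * (1.0 - arousal))
            return (red, green, blue)

        word_count = 3                                  # hypothetical corpus count
        scale = intensity(word_count)                   # in [0, 1]
        valence, arousal = 0.6 * scale, 0.8 * scale     # hypothetical lexicon scores
        print(emotion_color(valence, arousal))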

    An Infrastructure to Support Interoperability in Reverse Engineering

    An infrastructure that supports interoperability among reverse engineering tools and other software tools is described. The three major components of the infrastructure are: (1) a hierarchy of schemas for low- and middle-level program representation graphs; (2) g4re, a tool chain for reverse engineering C++ programs; and (3) a repository of reverse engineering artifacts that includes the previous two components, a test suite, and tools, GXL instances, and XSLT transformations for graphs at each level of the hierarchy. The results of two case studies that investigated the space and time costs incurred by the infrastructure are provided. The results of two empirical evaluations performed using the api module of g4re, focused respectively on the computation of object-oriented metrics and the three-dimensional visualization of class template diagrams, are also provided.
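
    As a small illustration of working with the repository's graph artifacts, the sketch below loads a GXL instance and counts its nodes and edges; it assumes GXL's standard node/edge vocabulary, the file name is hypothetical, and the snippet is not part of g4re itself.

        # Summarize a GXL program-representation graph: count nodes and edges.
        import xml.etree.ElementTree as ET

        def summarize_gxl(path):
            root = ET.parse(path).getroot()
            nodes = root.findall(".//node")             # GXL nodes
            edges = root.findall(".//edge")             # GXL edges (from/to attributes)
            return len(nodes), len(edges)

        n_nodes, n_edges = summarize_gxl("class_graph.gxl")   # hypothetical GXL instance
        print(f"{n_nodes} nodes, {n_edges} edges")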

    Interactive visualization of video content and associated description for semantic annotation

    In this paper, we present an intuitive graphic framework introduced for the effective visualization of video content and associated audio-visual description, with the aim to facilitate a quick understanding and annotation of the semantic content of a video sequence. The basic idea consists in the visualization of a 2D feature space in which the shots of the considered video sequence are located. Moreover, the temporal position and the specific content of each shot can be displayed and analysed in more detail. The selected features are decided by the user, and can be updated during the navigation session. In the main window, shots of the considered video sequence are displayed in a Cartesian plane, and the proposed environment offers various functionalities for automatically and semi-automatically finding and annotating the shot clusters in such a feature space. With this tool the user can therefore explore graphically how the basic segments of a video sequence are distributed in the feature space, and can recognize and annotate the significant clusters and their structure. The experimental results show that browsing and annotating documents with the aid of the proposed visualization paradigms is easy and quick, since the user has fast and intuitive access to the audio-video content, even if he or she has not seen the document yet.
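
    The 2D feature-space view can be sketched as below: per-shot feature vectors are plotted on two user-selected axes so that clusters of similar shots stand out for annotation; the feature names and values are hypothetical and the snippet is a stand-in for the paper's interactive tool.

        # Scatter-plot shots on two user-selected features for visual clustering.
        import numpy as np
        import matplotlib.pyplot as plt

        # Rows are shots; columns are features (hypothetical values).
        features = np.array([
            [0.10, 0.80, 0.30],
            [0.15, 0.75, 0.35],
            [0.90, 0.20, 0.60],
            [0.85, 0.25, 0.55],
        ])
        x_feature, y_feature = 0, 1     # indices of the two features chosen by the user

        plt.scatter(features[:, x_feature], features[:, y_feature])
        for shot_id, (x, y) in enumerate(zip(features[:, x_feature], features[:, y_feature])):
            plt.annotate(f"shot {shot_id}", (x, y))
        plt.xlabel("motion activity (assumed)")
        plt.ylabel("audio energy (assumed)")
        plt.show()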