19 research outputs found

    Adaptive remote visualization system with optimized network performance for large scale scientific data

    Get PDF
    This dissertation discusses algorithmic and implementation aspects of an automatically configurable remote visualization system, which optimally decomposes and adaptively maps the visualization pipeline to a wide-area network. The first node typically serves as a data server that generates or stores raw data sets, and a remote client resides on the last node, equipped with a display device ranging from a personal desktop to a powerwall. Intermediate nodes can be located anywhere on the network and often include workstations, clusters, or custom rendering engines. We employ a regression-model-based network daemon to estimate the effective bandwidth and minimal delay of a transport path using active traffic measurement. Data processing time is predicted for various visualization algorithms using block partitioning and statistical techniques. Based on the link measurements, node characteristics, and module properties, we strategically organize visualization pipeline modules such as filtering, geometry generation, rendering, and display into groups, and dynamically assign them to appropriate network nodes to achieve minimal total delay for post-processing or maximal frame rate for streaming applications. We propose polynomial-time algorithms based on dynamic programming to compute optimal solutions for the problems of pipeline decomposition and network mapping under different constraints. A parallel remote visualization system, comprising a logical group of autonomous nodes that cooperate to enable sharing, selection, and aggregation of various types of resources distributed over a network, is implemented and deployed at geographically distributed nodes for experimental testing. Our system is capable of handling a complete spectrum of remote visualization tasks, including post-processing, computational steering, and wireless sensor network monitoring.
Visualization functionalities such as isosurface extraction, ray casting, streamline tracing, and line integral convolution (LIC) are supported in our system. The proposed decomposition and mapping scheme is generic and can be applied to other network-oriented computing applications whose components form a linear arrangement.
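The dynamic-programming formulation described in this abstract can be sketched as follows. This is a minimal illustration under assumed inputs (per-node module compute times, inter-node link bandwidths, inter-module data sizes), not the dissertation's actual implementation: contiguous groups of pipeline modules are assigned, in order, to nodes along the network path so that total delay (compute plus transfer) is minimized.

```python
# Hypothetical sketch of DP-based pipeline decomposition and mapping.
# dp[i][j] = minimal total delay when modules 0..i are assigned to
# nodes 0..j, with each node receiving a contiguous group of modules.
# Module 0 stays on node 0 (the data server), matching the abstract.

def min_total_delay(compute, data_size, bandwidth):
    """
    compute[m][n]  : processing time of module m on node n (assumed given)
    data_size[m]   : size of the data emitted by module m
    bandwidth[n]   : effective bandwidth of the link from node n to n+1
    """
    M, N = len(compute), len(compute[0])
    INF = float("inf")
    dp = [[INF] * N for _ in range(M)]
    cost0 = 0.0
    for i in range(M):                    # all of modules 0..i on node 0
        cost0 += compute[i][0]
        dp[i][0] = cost0
    for j in range(1, N):
        for i in range(M):
            best = dp[i][j - 1]           # option: node j stays unused
            run = 0.0
            # option: modules k+1..i run on node j, after transferring
            # module k's output across the link from node j-1 to node j
            for k in range(i - 1, -1, -1):
                run += compute[k + 1][j]
                cand = dp[k][j - 1] + data_size[k] / bandwidth[j - 1] + run
                best = min(best, cand)
            dp[i][j] = best
    return dp[M - 1][N - 1]
```

The table has O(MN) entries and each entry scans O(M) split points, giving the polynomial O(M^2 N) time the abstract alludes to; reconstructing the actual grouping would only require retaining the argmin at each entry.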

    An Optimal Ray Traversal Scheme for Visualizing Colossal Medical Volumes

    No full text
    Modern computers are unable to store the complete data of high-resolution medical images in main memory. Even on secondary memory (disk), such large datasets are sometimes stored in a compressed form. At rendering time, parts of the volume are requested by the ray tracing algorithm and are loaded from disk. If one is not careful, the same regions may be (decompressed and) loaded to memory several times. Instead, a coherent algorithm should be designed that minimizes this thrashing and optimizes the time and effort spent to (uncompress and) load the volume. We present an algorithm that divides the volume into cubic cells, each (compressed and) stored on disk, in contrast to the more common slice-based storage. At rendering time, each cell is allocated a queue of rays. For a sequence of images, all rays are spawned and queued at the cells they intersect first. Cells are loaded, one at a time, in front-to-back (FTB) order. A loaded cell is rendered by all rays found in its queue…
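The queue-per-cell traversal described above can be sketched roughly as follows. All names here are assumptions for illustration; the point is the access pattern: each cell is decompressed and loaded at most once per pass, and rays advance by re-queueing at the next cell they enter.

```python
# Minimal sketch (assumed names) of ray-queue, front-to-back traversal:
# rays are queued at the first cell they intersect; cells are visited in
# FTB order, so a ray re-queued at a later cell is never missed.
from collections import defaultdict, deque

def render_pass(cells_ftb, first_cell_of, rays, load_cell, step_ray):
    """
    cells_ftb     : cell ids in front-to-back order
    first_cell_of : ray -> id of the first cell it intersects (or None)
    load_cell     : cell id -> decompressed cell data (expensive disk I/O)
    step_ray      : (ray, cell_data) -> next cell id, or None when done
    """
    queues = defaultdict(deque)
    for ray in rays:                      # spawn: queue each ray at its entry cell
        c = first_cell_of(ray)
        if c is not None:
            queues[c].append(ray)
    loads = 0
    for c in cells_ftb:                   # each cell is loaded at most once
        if not queues[c]:
            continue                      # no ray touches this cell: skip I/O
        data = load_cell(c)
        loads += 1
        while queues[c]:
            ray = queues[c].popleft()
            nxt = step_ray(ray, data)     # advance the ray through this cell
            if nxt is not None:
                queues[nxt].append(ray)   # park it at the next cell it enters
    return loads                          # number of cell loads this pass
```

Because FTB order guarantees that a ray's next cell always lies later in the visit sequence, a single linear sweep over the cells services every ray without reloading.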

    Don’t forget to save! User experience principles for video game narrative authoring tools.

    Get PDF
    Interactive Digital Narratives (IDNs) are a natural evolution of traditional storytelling melded with technological improvements brought about by the rapidly advancing digital revolution. This has enhanced, and continues to enhance, the complexity and functionality of the stories that we can tell. Video game narratives, both old and new, are considered close relatives of IDNs and, due to their enhanced interactivity and presentational methods, further complicate the creation process. Authoring tool software aims to alleviate these complexities by abstracting underlying data models into accessible user interfaces that creatives, even those with limited technical experience, can use to author their stories. Unfortunately, despite the vast array of authoring tools in this space, user experience is often overlooked even though it is arguably one of the most vital components. This has resulted in a focus on the audience within IDN research rather than the authors, and consequently our knowledge and understanding of the impacts of user experience design decisions in authoring tools are limited. This thesis tackles the modeling of complex video game narrative structures and investigates how user experience design decisions within IDN authoring tools may impact the authoring process. I first introduce my concept of Discoverable Narrative, which establishes a vocabulary for the analysis, categorization, and comparison of aspects of video game narrative that are discovered, observed, or experienced by players — something that existing models struggle to detail. I also develop and present my Novella Narrative Model, which provides support for video game narrative elements and makes several novel innovations that set it apart from existing narrative models.
This thesis then builds upon these models by presenting two bespoke user studies that examine the user experience of the state of the art in IDN authoring tool design, together building a listing of seven general Themes and five principles (Metaphor Testing, Fast Track Testing, Structure, Experimentation, Branching) that highlight evidenced behavioral trends of authors based on different user experience design factors within IDN authoring tools. This represents some of the first work in this space that investigates the relationships between the user experience design of IDN authoring tools and the impacts they can have on authors. Additionally, a generalized multi-stage pipeline for the design and development of IDN authoring tools is introduced, informed by professional industry-standard design techniques, in an effort both to ensure quality user experience within my own work and to raise awareness of the importance of following proper design processes when creating authoring tools, also serving as a template for doing so.

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Get PDF
    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework for 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
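The attention step that gives the ABiLSTM family its name can be sketched as below. This is a generic illustration, not the paper's implementation: a BiLSTM emits one hidden state per CSI time step, and attention collapses them into a single weighted summary vector before the classifier. The hidden states here are random placeholders standing in for real BiLSTM outputs, and the attention vector is assumed to have been learned elsewhere.

```python
import numpy as np

def attention_pool(H, w):
    """
    H : (T, D) hidden states from a bidirectional LSTM (D = 2 * hidden size)
    w : (D,)   learned attention vector (assumed trained elsewhere)
    returns (D,) context vector: softmax-weighted sum over time steps
    """
    scores = H @ w                      # (T,) alignment score per time step
    scores = scores - scores.max()      # shift for numerical stability
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum()         # attention weights sum to 1
    return alpha @ H                    # weighted sum over the T steps

# Placeholder demo: 50 CSI time steps, 64 hidden units per direction.
T, D = 50, 128
rng = np.random.default_rng(0)
H = rng.standard_normal((T, D))        # stand-in for BiLSTM outputs
context = attention_pool(H, rng.standard_normal(D))
```

The intuition is that time steps whose hidden states align with the learned vector `w` (e.g. the instants where the activity's motion signature is strongest) dominate the summary fed to the softmax classifier, rather than only the final LSTM state.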

    TOWARDS A MODEL FOR ARTIFICIAL AESTHETICS - Contributions to the Study of Creative Practices in Procedural and Computational Systems

    Get PDF
    This work proposes the development of an analytical model, and of the terminology associated with it, for the study of computational aesthetic artefacts. Recognising the growing presence and use of computational media, we begin by studying how, through remediation, they quantitatively transform the media that preceded them, and how their procedural and computational properties affect those media qualitatively. To understand the creative potential and the specificity of computational media, we develop a model for their practice, criticism, and analysis. As a starting point we draw on the typology developed by Espen Aarseth for the study of cybertexts, assessing its suitability for the analysis of visual and audiovisual ergodic pieces, and adapting and expanding it with new variables and their respective values. The model is tested through the analysis of a set of pieces that represent diverse approaches to procedural creation and diverse areas of contemporary creative activity. A control analysis is subsequently carried out to assess the usability and usefulness of the model, its capacity to produce objective classifications, and the rigour of the analysis. We demonstrate the partial suitability of Aarseth's model for the study of non-textual artefacts and expand it to better describe the pieces studied. We conclude that the proposed model produces good descriptions of the pieces, grouping them logically and reflecting stylistic and procedural affinities between systems that, if studied on the basis of their sensory properties or their surface structures, would probably not reveal many similarities. The affinities revealed by the model are structural and procedural, and they attest to the importance of computational characteristics for the aesthetic appreciation of the works.
We confirm our initial conjecture about the importance of procedurality not only in the development and implementation phases of the works, but also as a conceptual and aesthetic basis for artistic creation and appreciation, as an aesthetic pleasure.

    RFID Technology in Intelligent Tracking Systems in Construction Waste Logistics Using Optimisation Techniques

    Get PDF
    Construction waste disposal is an urgent issue for protecting our environment. This paper proposes a waste management system and illustrates the work process using plasterboard waste as an example, which creates a hazardous gas when landfilled with household waste, and for which the recycling rate is less than 10% in the UK. The proposed system integrates RFID technology, Rule-Based Reasoning, Ant Colony optimization, and knowledge technology for auditing and tracking plasterboard waste, guiding the operation staff, arranging vehicles, and planning schedules, and it also provides evidence to verify disposal. The system relies on RFID equipment for collecting logistical data and uses digital imaging equipment to provide further evidence; the reasoning core in the third layer is responsible for generating schedules, route plans, and guidance, and the last layer delivers the results to users. The paper first introduces the current plasterboard disposal situation and addresses the logistical problem that is now the main barrier to a higher recycling rate, followed by a discussion of the proposed system in terms of both system-level structure and process structure. Finally, an example scenario is given to illustrate the system's utilization.
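The Ant Colony optimization component used for vehicle route planning can be sketched in miniature as follows. This is a generic textbook-style illustration under assumed inputs (a symmetric distance matrix between collection points), not the paper's implementation: ants build routes probabilistically, biased by pheromone trails and inverse distance, and pheromone is reinforced along the best route found so far.

```python
import random

def aco_route(dist, n_ants=10, n_iters=50, evap=0.5, seed=1):
    """Toy Ant Colony route planner over a symmetric distance matrix.
    Returns (best_route starting at depot 0, best closed-tour length)."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]        # pheromone on each edge
    best_route, best_len = None, float("inf")
    rng = random.Random(seed)
    for _ in range(n_iters):
        for _ in range(n_ants):
            route, unvisited = [0], set(range(1, n))
            while unvisited:
                i = route[-1]
                cand = list(unvisited)
                # desirability = pheromone * heuristic (1 / distance)
                w = [tau[i][j] / dist[i][j] for j in cand]
                nxt = rng.choices(cand, weights=w)[0]
                route.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[route[k]][route[k + 1]]
                         for k in range(n - 1)) + dist[route[-1]][0]
            if length < best_len:
                best_route, best_len = route, length
        # evaporate everywhere, then deposit along the best tour so far
        tau = [[t * (1 - evap) for t in row] for row in tau]
        for i, j in zip(best_route, best_route[1:] + [0]):
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_route, best_len

# Demo: four collection points on a unit square; the optimal loop length is 4.
_sq = 2 ** 0.5
demo_dist = [[0, 1, _sq, 1], [1, 0, 1, _sq], [_sq, 1, 0, 1], [1, _sq, 1, 0]]
route, length = aco_route(demo_dist)
```

In the paper's setting the "cities" would be waste collection sites and the distances travel times; the rule-based reasoning layer would then feed the chosen route to the scheduling and guidance components.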

    Distributed pattern mining and data publication in life sciences using big data technologies

    Get PDF

    Digital Media and Textuality: From Creation to Archiving

    Get PDF
    Due to computers' ability to combine different semiotic modes, texts are no longer composed exclusively of static images and mute words. How have digital media changed the way we write and read? What methods of textual and data analysis have emerged? How do we rescue digital artifacts from obsolescence? And how can digital media be used or taught inside classrooms? These and other questions are addressed in this volume, which assembles contributions by artists, writers, scholars, and editors. They offer a multiperspectival view on the way digital media have changed our notion of textuality.

    Digital Media and Textuality

    Get PDF
    Due to computers' ability to combine different semiotic modes, texts are no longer composed exclusively of static images and mute words. How have digital media changed the way we write and read? What methods of textual and data analysis have emerged? How do we rescue digital artifacts from obsolescence? And how can digital media be used or taught inside classrooms? These and other questions are addressed in this volume, which assembles contributions by artists, writers, scholars, and editors such as Dene Grigar, Sandy Baldwin, Carlos Reis, and Frieder Nake. They offer a multiperspectival view on the way digital media have changed our notion of textuality.