
    The telematic dress: Evolving garments and distributed proprioception in streaming media and fashion performance

    Centered on several short films from streaming performances created in 2005, this paper explores new ideas for movement technologies and garment design in an arts and digital research context. The "telematic dress" project, developed at the DAP Lab in Nottingham, involves transdisciplinary intersections between fashion and live performance, interactive system architecture, electronic textiles, wearable technologies, choreography, and anthropology. The concept of an evolving garment design that is materialized (moved) in live performance originates from DAP Lab's experimentation with telematics and distributed media (http://art.ntu.ac.uk/performance_research/birringer/dap.htm), addressing "connective tissues" through a study of perception/proprioception in the wearer (tactile sensory processing) and the dancer/designer/viewer relationship. This study is conducted as cross-cultural communication with online performance partners in Europe, the US, Brazil and Japan. The interactive space is predicated on transcultural questions: how does movement with an evolving design and wearable interactive sensors travel, how does movement - and the capturing of movement - allow the design to emerge toward a garment statement, and how are bodies-in-relation-to sensory fabrics affected by the multidimensional kinesthetics of a media-rich, responsive environment?

    A System for Acquiring, Processing, and Rendering Panoramic Light Field Stills for Virtual Reality

    We present a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR). We acquire spherical light field datasets with two novel light field camera rigs designed for portable and efficient light field acquisition. We introduce a novel real-time light field reconstruction algorithm that uses a per-view geometry and a disk-based blending field. We also demonstrate how to use a light field prefiltering operation to project from a high-quality offline reconstruction model into our real-time model while suppressing artifacts. We introduce a practical approach for compressing light fields by modifying the VP9 video codec to provide high-quality compression with real-time, random-access decompression. We combine these components into a complete light field system offering convenient acquisition, compact file size, and high-quality rendering while generating stereo views at 90 Hz on commodity VR hardware. Using our system, we built a freely available light field experience application called Welcome to Light Fields, featuring a library of panoramic light field stills for consumer VR, which has been downloaded over 15,000 times. Comment: 15 pages, 14 figures, 2 tables; accepted by SIGGRAPH Asia 2018; low-resolution version.
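    As a rough illustration of what a per-view blending field might look like, the following Python sketch weights nearby captured views by angular distance with a smooth disk-shaped falloff. The function name, radius and falloff shape are illustrative assumptions, not the authors' real-time reconstruction.

```python
import numpy as np

def blend_weights(ray_dir, view_dirs, disk_radius=0.15):
    """Toy per-view blending: weight each captured view by how close its
    direction is to the query ray, with a smooth falloff that reaches zero
    at the edge of an angular 'disk' (the radius is an assumed parameter)."""
    cos_a = np.clip(view_dirs @ ray_dir, -1.0, 1.0)
    ang = np.arccos(cos_a)                      # angular distance per view
    w = np.maximum(0.0, 1.0 - ang / disk_radius) ** 2
    total = w.sum()
    return w / total if total > 0 else w        # normalized blend weights

# Example: three captured view directions, query ray straight ahead (+z).
views = np.array([[0.0, 0.0, 1.0], [0.0, 0.1, 0.995], [1.0, 0.0, 0.0]])
views /= np.linalg.norm(views, axis=1, keepdims=True)
print(blend_weights(np.array([0.0, 0.0, 1.0]), views))
```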

    Wearable performance

    This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2009 Taylor & Francis. Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or for various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab, which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP-Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.

    Enabling collaboration in virtual reality navigators

    In this paper we characterize a feature superset for Collaborative Virtual Reality Environments (CVRE) and derive a component framework to transform stand-alone VR navigators into full-fledged multithreaded collaborative environments. The contributions of our approach lie in a cost-effective and extensible technique for loading software components into separate POSIX threads for rendering, user interaction and network communications, and in a top layer for managing session collaboration. The framework recasts a VR navigator under a distributed peer-to-peer topology for scene and object sharing, using callback hooks for broadcasting remote events, and supports multi-camera perspective sharing with avatar interaction. We validate the framework by applying it to our own ALICE VR Navigator. Experimental results show that our approach performs well in the collaborative inspection of complex models.
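    A minimal sketch of the thread-per-component idea, using Python's threading module in place of POSIX threads; the component names and the shared event queue standing in for callback hooks are assumptions for illustration, not the framework's API.

```python
import queue
import threading
import time

class Component(threading.Thread):
    """One navigator subsystem in its own thread, mirroring the separate
    threads for rendering, user interaction and network communications."""
    def __init__(self, name, events):
        super().__init__(name=name, daemon=True)
        self.events = events  # shared queue standing in for callback hooks

class Interaction(Component):
    def run(self):
        while True:
            self.events.put("avatar_moved")  # pretend the user moved
            time.sleep(0.2)

class Network(Component):
    def run(self):
        while True:
            try:
                event = self.events.get(timeout=0.1)
                print(f"[{self.name}] broadcasting {event} to peers")
            except queue.Empty:
                pass  # nothing to send this tick

events = queue.Queue()
Interaction("interaction", events).start()
Network("network", events).start()
time.sleep(1.0)  # let the toy session run briefly, then exit
```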

    SU(2) Lattice Gauge Theory Simulations on Fermi GPUs

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. To obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50 000) without smearing and almost 2 000 configurations with APE smearing. With two Fermi GPUs we achieve an excellent performance of around 110 Gflop/s in single precision, a 200x speed-up over one CPU. We also find that, on the Fermi architecture, double-precision computations of the static quark-antiquark potential are not much slower (less than 2x slower) than single-precision computations. Comment: 20 pages, 11 figures, 3 tables; accepted in Journal of Computational Physics.
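    For orientation, here is a minimal NumPy sketch of the mean-plaquette measurement on random SU(2) links. It is a CPU toy for the observable only, not the paper's optimized CUDA Monte Carlo, and the lattice size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
L, DIM = 4, 4  # lattice extent and number of dimensions (assumed small)

def random_su2():
    """Random SU(2) matrix: U = a0*I + i*(a . sigma), a a unit 4-vector."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# One link matrix per site and direction.
links = np.array([[random_su2() for _ in range(DIM)]
                  for _ in range(L ** DIM)]).reshape((L,) * DIM + (DIM, 2, 2))

def shift(site, mu):
    nxt = list(site)
    nxt[mu] = (nxt[mu] + 1) % L  # periodic boundary conditions
    return tuple(nxt)

def plaquette(site, mu, nu):
    """(1/2) Re Tr of the elementary plaquette in the (mu, nu) plane."""
    p = (links[site][mu] @ links[shift(site, mu)][nu]
         @ links[shift(site, nu)][mu].conj().T @ links[site][nu].conj().T)
    return np.trace(p).real / 2.0

vals = [plaquette(site, mu, nu)
        for site in np.ndindex(*(L,) * DIM)
        for mu in range(DIM) for nu in range(mu + 1, DIM)]
print("mean plaquette:", np.mean(vals))  # ~0 for unthermalized random links
```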

    Service oriented interactive media (SOIM) engines enabled by optimized resource sharing

    In the same way that cloud computing, Software as a Service (SaaS) and Content Centric Networking (CCN) triggered a new class of software architectures fundamentally different from traditional desktop software, service-oriented networking (SON) suggests a new class of media engine technologies, which we call Service Oriented Interactive Media (SOIM) engines. This includes a new approach for game engines and, more generally, interactive media engines for entertainment, training, educational and dashboard applications. Porting traditional game engines and interactive media engines to the cloud without fundamentally changing the architecture, as is done frequently, can already enable various advantages of cloud computing for such applications, for example simple and transparent upgrading of content and a unified user experience on all end-user devices. This paper discusses a new architecture for game engines and interactive media engines fundamentally designed for the cloud and SON. The main advantage of SOIM engines is significantly higher resource efficiency, which reduces cloud hosting costs to a fraction. SOIM engines achieve these benefits by multilayered data sharing, by efficiently handling many input and output channels for video, audio, and 3D world synchronization, and by smart user session and session slot management. The architecture and results of a prototype implementation of a SOIM engine are discussed.
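    As a caricature of the session slot idea, the sketch below lets many user sessions share one world instance instead of spawning an engine per user. All class and field names are hypothetical; the paper does not publish this API.

```python
class SharedWorld:
    """One simulation instance shared by every connected session
    (a heavily simplified stand-in for multilayered data sharing)."""
    def __init__(self, name):
        self.name = name
        self.tick = 0

    def step(self):
        self.tick += 1  # one simulation step serves all sessions at once

class SessionManager:
    """Hands out per-user session slots that all reference the same world."""
    def __init__(self, world, max_slots):
        self.world = world
        self.max_slots = max_slots
        self.slots = {}

    def open_session(self, user):
        if len(self.slots) >= self.max_slots:
            raise RuntimeError("no free session slots")
        # Each slot holds only per-user state: a camera and an output stream.
        self.slots[user] = {"camera": (0, 0),
                            "stream": f"{self.world.name}/{user}"}
        return self.slots[user]

world = SharedWorld("lobby")
manager = SessionManager(world, max_slots=2)
print(manager.open_session("alice"))
print(manager.open_session("bob"))
world.step()
```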

    Towards the 3D Web with Open Simulator

    Continuing advances and reduced costs in computational power, graphics processors and network bandwidth have led to 3D immersive multi-user virtual worlds becoming increasingly accessible while offering an improved and engaging Quality of Experience. At the same time, the functionality of the World Wide Web continues to expand alongside the computing infrastructure it runs on, and pages can now routinely accommodate many forms of interactive multimedia components as standard features - streaming video, for example. Inevitably there is an emerging expectation that the Web will expand further to incorporate immersive 3D environments. This is exciting because humans are well adapted to operating in 3D environments, and it is challenging because existing software and skill sets are focused on competencies in 2D Web applications. Open Simulator (OpenSim) is a freely available open-source toolkit that empowers users to create and deploy their own 3D environments in the same way that anyone can create and deploy a Web site. Its characteristics can be seen as a set of reference points for how the 3D Web could be instantiated. This paper describes experiments carried out with OpenSim to better understand network and system issues, and presents experience in using OpenSim to develop and deliver applications for education and cultural heritage. Evaluation is based upon observations of these applications in use and measurements of systems both in the lab and in the wild.

    Acceleration of stereo-matching on multi-core CPU and GPU

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two parallelism-enabled architectures, a multi-core CPU and a GPU. The algorithm is part of the vision system developed for a binocular robot head in the context of the CloPeMa research project, which focuses on the design of a new clothes-folding robot with real-time and high-resolution requirements for the vision system. The performance analysis shows that the parallelised stereo-matching algorithm is significantly accelerated, achieving speed-ups of 12x on the multi-core CPU and 176x on the GPU compared with a non-SIMD single-threaded CPU implementation. To analyse the origin of the speed-up and to gain a deeper understanding of the choice of optimal hardware, the algorithm was broken down into key sub-tasks and their performance was tested on four different hardware architectures.
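    Since the abstract does not reproduce the algorithm itself, the sketch below shows a generic single-threaded dense block-matching baseline (winner-takes-all over a SAD cost volume) of the kind such accelerations typically start from; the window size and disparity range are assumed values, not the CloPeMa parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def sad_disparity(left, right, max_disp=16, win=5):
    """Winner-takes-all block matching: for each candidate disparity d,
    compare shifted windows by sum of absolute differences (SAD) and
    keep, per pixel, the disparity with the lowest cost."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    box = np.ones((win, win))
    for d in range(max_disp):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        cost[d, :, d:] = convolve2d(diff, box, mode="same")  # windowed SAD
    return cost.argmin(axis=0)

# Synthetic check: the left image is the right image shifted by 5 pixels.
rng = np.random.default_rng(1)
right = rng.random((60, 80))
left = np.roll(right, 5, axis=1)
disp = sad_disparity(left, right)
print("median disparity (away from the wrapped border):", np.median(disp[:, 10:]))
```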

    MINDtouch embodied ephemeral transference: Mobile media performance research

    This is the post-print version of the final published article that is available from the link below. Copyright @ Intellect Ltd 2011. The aim of the author's media art research has been to uncover new understandings of the sensations of liveness and presence that may emerge in participatory networked performance using mobile phones and physiological wearable devices. To investigate these concepts in practice, a mobile media performance series called MINDtouch was created. The MINDtouch project proposed that the mobile videophone become a new way to communicate non-verbally, visually and sensually across space. It explored notions of ephemeral transference, distance collaboration and participant-as-performer to study the presence and liveness emerging from the use of wireless mobile technologies within real-time, mobile performance contexts. Through participation by in-person and remote interactors creating mobile video-streamed mixes, the project interweaves and embodies a daisy chain of technologies through the network space. As part of practice-based Ph.D. research conducted at the SMARTlab Digital Media Institute at the University of East London, MINDtouch has been under the direction of Professor Lizbeth Goodman and sponsored by BBC R&D. The aim of this article is to discuss the project research, conducted and recently completed for submission, in terms of the technical and aesthetic developments from 2008 to the present, as well as the final phase of staging the events from July 2009 to February 2010. This piece builds on an earlier article (Baker 2008), which focused on the outcomes of phase 1 of the research project and initial developments in phase 2. The outcomes from phases 2 and 3 of the project are discussed in this article.

    Cache policies for cloud-based systems: To keep or not to keep

    In this paper, we study cache policies for cloud-based caching. Cloud-based caching uses cloud storage services such as Amazon S3 as a cache for data items that would otherwise have to be recomputed. Cloud-based caching departs from classical caching: cloud resources are potentially infinite and are paid for only when used, while classical caching relies on a fixed storage capacity whose main monetary cost comes from the initial investment. To deal with this new context, we design and evaluate a new caching policy that minimizes the overall cost of a cloud-based system. The policy takes into account the frequency of consumption of an item and the cloud cost model. We show that this policy is easier to operate, that it scales with demand, and that it outperforms classical policies that manage a fixed capacity. Comment: Proceedings of the IEEE International Conference on Cloud Computing 2014 (CLOUD 14).
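    The keep-or-discard trade-off can be illustrated with a back-of-the-envelope comparison: keep an item while storing it until its next expected request costs less than recomputing it. The function and the expected-wait estimate below are illustrative assumptions, not the paper's exact policy.

```python
def keep_in_cache(recompute_cost, storage_cost_per_hour, requests_per_hour):
    """Toy cost-aware policy: keep an item if paying for storage until the
    next expected request is cheaper than recomputing it on demand."""
    if requests_per_hour <= 0:
        return False  # never requested again: storing can only lose money
    expected_wait_h = 1.0 / requests_per_hour  # mean time to next request
    storage_until_hit = storage_cost_per_hour * expected_wait_h
    return storage_until_hit < recompute_cost

# A hot, expensive-to-recompute item is kept; a cold, cheap one is dropped.
print(keep_in_cache(recompute_cost=0.05, storage_cost_per_hour=0.001, requests_per_hour=2.0))
print(keep_in_cache(recompute_cost=0.002, storage_cost_per_hour=0.001, requests_per_hour=0.1))
```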