1,085 research outputs found

    Wearable performance

    This is the post-print version of the article. Copyright © 2009 Taylor & Francis. Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but also integration into clothing or the human body. In artistic contexts, such integrated wearable devices are expected to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or for various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes between display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab, which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from the DAP-Lab's work with telepresence and distributed media, addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.

    Interactive visualization tools for topological exploration

    Thesis (Ph.D.) - Indiana University, Computer Science, 1992. This thesis concerns using computer graphics methods to visualize mathematical objects. Abstract mathematical concepts are extremely difficult to visualize, particularly when higher dimensions are involved; I therefore concentrate on subject areas such as the topology and geometry of four dimensions, which provide a very challenging domain for visualization techniques. In the first stage of this research, I applied existing three-dimensional computer graphics techniques to visualize projected four-dimensional mathematical objects in an interactive manner. I carried out experiments with direct object manipulation and constraint-based interaction and implemented tools for visualizing mathematical transformations. As an application, I applied these techniques to visualizing the conjecture known as Fermat's Last Theorem. Four-dimensional objects would best be perceived through four-dimensional eyes. Even though we do not have four-dimensional eyes, we can use computer graphics techniques to simulate the effect of a virtual four-dimensional camera viewing a scene where four-dimensional objects are illuminated by four-dimensional light sources. I extended standard three-dimensional lighting and shading methods to work in the fourth dimension. This involved replacing the standard "z-buffer" algorithm with a "w-buffer" algorithm for handling occlusion, and replacing the standard "scan-line" conversion method with a new "scan-plane" conversion method. Furthermore, I implemented a new "thickening" technique that made it possible to illuminate surfaces correctly in four dimensions. These new techniques generate smoothly shaded, highlighted view-volume images of mathematical objects as they would appear from a four-dimensional viewpoint. The images reveal fascinating structures of mathematical objects that could not be seen with standard 3D computer graphics techniques. As applications, I generated still images and animation sequences for mathematical objects such as the Steiner surface, the four-dimensional torus, and a knotted 2-sphere. The images of surfaces embedded in 4D that have been generated using these methods are unique in the history of mathematical visualization. Finally, I adapted the techniques to visualize volumetric data (3D scalar fields) generated by other scientific applications. Compared to other volume visualization techniques, this method provides a new approach that researchers can use to look at and manipulate certain classes of volume data.
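
    The "w-buffer" idea the abstract describes can be sketched in a few lines: just as a z-buffer keeps the nearest fragment per 2D pixel, a w-buffer keeps, per 3D "pixel" (voxel) of the projected view volume, the fragment nearest in the fourth coordinate. The function and data layout below are illustrative assumptions, not the thesis's actual implementation.

    ```python
    import numpy as np

    def w_buffer_render(fragments, shape):
        """Resolve 4D occlusion by keeping, per voxel of the 3D view volume,
        the fragment with the smallest w depth -- the 4D analogue of a z-buffer.
        `fragments` is a list of (x, y, z, w, color) tuples with integer
        voxel indices. Names and layout here are hypothetical."""
        depth = np.full(shape, np.inf)      # closest w seen so far, per voxel
        image = np.zeros(shape)             # resolved intensity per voxel
        for x, y, z, w, color in fragments:
            if w < depth[x, y, z]:          # nearer in the 4th dimension wins
                depth[x, y, z] = w
                image[x, y, z] = color
        return image

    # Two fragments land on the same voxel; the one with smaller w survives.
    frags = [(0, 0, 0, 2.0, 0.3), (0, 0, 0, 1.0, 0.9)]
    vol = w_buffer_render(frags, (2, 2, 2))
    print(vol[0, 0, 0])  # 0.9
    ```

    The "scan-plane" conversion mentioned above would generate these voxel fragments from 4D primitives, just as scan-line conversion generates pixel fragments from 3D ones.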

    Image Morphing

    Morphing is also used in the gaming industry to add engaging animation to video games and computer games. However, morphing techniques are not limited to entertainment purposes. Morphing is a powerful tool that can enhance many multimedia projects, such as presentations, education, electronic book illustrations, and computer-based training.

    A Novel and Effective Short Track Speed Skating Tracking System

    This dissertation proposes a novel and effective system for tracking high-speed skaters. A novel registration method is employed to automatically discover key frames to build a panorama of the rink, from which the homography between a frame and the real-world rink can be generated. Aimed at several challenging tracking problems in short track skating, a novel multiple-object tracking approach is proposed that combines Gaussian mixture models (GMMs), evolving templates, a constrained dynamical model, a fuzzy model, and multiple-template initialization and evolution. The outputs of the system include spatio-temporal trajectories, velocity analysis, and 2D reconstruction animations. The tracking accuracy is about 10 cm (2 pixels). Such information is invaluable for sports experts. Experimental results demonstrate the effectiveness and robustness of the proposed system.
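
    The frame-to-rink mapping described above is a standard planar homography: a 3×3 matrix applied to homogeneous pixel coordinates, followed by a perspective divide. A minimal sketch, with a hypothetical matrix (a pure scale of 0.05 m/pixel, consistent with the reported ~10 cm = 2 pixel accuracy) rather than one estimated from a real panorama:

    ```python
    import numpy as np

    def apply_homography(H, pts):
        """Map pixel coordinates to rink coordinates with a 3x3 homography H.
        pts: (N, 2) array of (x, y) pixels; returns (N, 2) rink coordinates."""
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

    # Hypothetical homography: pure scale, 0.05 metres per pixel.
    H = np.diag([0.05, 0.05, 1.0])
    rink = apply_homography(H, np.array([[100.0, 40.0]]))
    print(rink)  # [[5. 2.]]
    ```

    In the actual system the homography would be estimated from point correspondences between each frame and the registered panorama; a general H also encodes rotation and perspective, which the divide by the third coordinate handles.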

    SPA: Sparse Photorealistic Animation using a single RGB-D camera

    Photorealistic animation is a desirable technique for computer games and movie production. We propose a new method to synthesize plausible videos of human actors with new motions using a single, inexpensive RGB-D camera. A small database is captured in an ordinary office environment; this capture happens only once and is reused for synthesizing different motions. We propose a markerless performance-capture method using sparse deformation to obtain the geometry and pose of the actor at each time instant in the database. Then, we synthesize an animation video of the actor performing a new motion defined by the user. An adaptive, model-guided texture synthesis method based on weighted low-rank matrix completion is proposed to be less sensitive to noise and outliers, which enables us to easily create photorealistic animation videos with new motions that differ from the motions in the database. Experimental results on a public dataset and our captured dataset verify the effectiveness of the proposed method.
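
    The weighted low-rank matrix completion mentioned above can be sketched with a simple iterative scheme: alternately project onto rank-r matrices (via truncated SVD) and re-impose the trusted observations, with weights down-weighting noisy or missing entries. This is a generic sketch of the idea, not the paper's actual solver.

    ```python
    import numpy as np

    def weighted_lowrank_complete(M, W, rank, iters=200):
        """Fill in a matrix from weighted observations by alternating a rank-r
        SVD projection with re-imposing the observed entries.
        M: data (arbitrary values at unobserved entries); W: weights in [0, 1]
        (0 = missing/outlier, 1 = trusted). Illustrative, not the paper's code."""
        X = M * W                                      # start from weighted data
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
            X = W * M + (1 - W) * L                    # keep observed, fill rest
        return X

    # A rank-1 ground truth with one missing entry (weight 0) is recovered.
    truth = np.outer([1.0, 2.0, 3.0], [1.0, 2.0])
    W = np.ones_like(truth); W[2, 1] = 0.0
    M = truth.copy(); M[2, 1] = 0.0
    X = weighted_lowrank_complete(M, W, rank=1)
    print(round(X[2, 1], 3))  # 6.0
    ```

    In the texture-synthesis setting, the matrix rows would be vectorized texture patches and the weights would encode per-pixel confidence, so outliers simply receive low weight instead of corrupting the low-rank fit.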

    Space-time sketching of character animation

    We present a space-time abstraction for the sketch-based design of character animation. It allows animators to draft a full coordinated motion using a single stroke called the space-time curve (STC). From the STC we compute a dynamic line of action (DLOA) that drives the motion of a 3D character through projective constraints. Our dynamic models for the line's motion are entirely geometric, require no pre-existing data, and allow full artistic control. The resulting DLOA can be refined by over-sketching strokes along the space-time curve, or by composing another DLOA on top, leading to control over complex motions with few strokes. Additionally, the resulting dynamic line of action can be applied to arbitrary body parts or characters. To match a 3D character to the 2D line over time, we introduce a robust matching algorithm based on closed-form solutions, yielding a tight match while allowing squash and stretch of the character's skeleton. Our experiments show that space-time sketching has the potential of bringing animation design within the reach of beginners while saving time for skilled artists.
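
    One hedged reading of how a single stroke can encode a whole motion: cumulative arc length along the stroke is mapped to time, so where the stroke is drawn determines the path and how far along it determines the timing. The function below is an illustrative assumption, not the paper's actual STC construction.

    ```python
    import numpy as np

    def stroke_to_spacetime(stroke, duration):
        """Interpret a single 2D stroke as a space-time curve: normalized
        cumulative arc length along the stroke becomes a timestamp for each
        sample point. Hypothetical sketch of the STC idea."""
        seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)   # segment lengths
        s = np.concatenate([[0.0], np.cumsum(seg)])             # arc length
        t = duration * s / s[-1]                                # -> timestamps
        return t, stroke

    stroke = np.array([[0.0, 0.0], [3.0, 4.0], [3.0, 10.0]])    # sampled stroke
    t, _ = stroke_to_spacetime(stroke, 2.0)
    print(t[-1])  # 2.0
    ```

    A dynamic line of action could then be posed at any time t by interpolating along this parameterization before the projective matching step described in the abstract.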

    Cross-Dimensional Gestural Interaction Techniques for Hybrid Immersive Environments

    We present a set of interaction techniques for a hybrid user interface that integrates existing 2D and 3D visualization and interaction devices. Our approach is built around one- and two-handed gestures that support the seamless transition of data between co-located 2D and 3D contexts. Our testbed environment combines a 2D multi-user, multi-touch projection surface with 3D head-tracked, see-through, head-worn displays and 3D tracked gloves to form a multi-display augmented reality. We also address some of the ways in which we can interact with private data in a collaborative, heterogeneous workspace.

    Expression Morphing Between Different Orientations

    How to generate new views based on given reference images has been an important and interesting topic in the area of image-based rendering. Two important algorithms that can be used are field morphing and view morphing. Field morphing, an image morphing algorithm, generates new views based on two reference images taken from the same viewpoint. The most successful result of field morphing is morphing from one person's face to another's. View morphing, a view synthesis algorithm, generates in-between views based on two reference views of the same object taken from different viewpoints. The result of view morphing is often an animation that moves an object from the viewpoint of one reference image to the viewpoint of the other. In this thesis, we propose a new framework that integrates field morphing and view morphing to solve the problem of expression morphing. Based on four reference images, we successfully generate a morph from one viewpoint with one expression to another viewpoint with a different expression. We also propose a new approach to eliminate artifacts that frequently occur in view morphing due to occlusions, and in field morphing due to unforeseen combinations of feature lines. We solve these problems by relaxing the monotonicity assumption to piece-wise monotonicity along the epipolar lines. Our experimental results demonstrate the efficiency of this approach in handling occlusions for more realistic synthesis of novel views.
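
    The common core of both morphing algorithms above is an in-between at parameter t: feature geometry is interpolated linearly (the warp) and pixel intensities are cross-dissolved. The sketch below shows only that core step, moving the feature points themselves; real field/view morphing also warps every pixel according to the feature motion, which this hypothetical function omits.

    ```python
    import numpy as np

    def linear_morph(pts_a, pts_b, img_a, img_b, t):
        """One morph step at parameter t in [0, 1]: linearly interpolate the
        feature points (geometric warp) and cross-dissolve the intensities.
        A minimal sketch, not the thesis's full pipeline."""
        pts_t = (1 - t) * pts_a + t * pts_b    # in-between feature geometry
        img_t = (1 - t) * img_a + t * img_b    # cross-dissolve of intensity
        return pts_t, img_t

    pts_a = np.array([[0.0, 0.0], [10.0, 0.0]])   # e.g. eye corners, image A
    pts_b = np.array([[2.0, 1.0], [12.0, 1.0]])   # same features, image B
    img_a = np.zeros((4, 4)); img_b = np.ones((4, 4))
    pts_half, img_half = linear_morph(pts_a, pts_b, img_a, img_b, 0.5)
    print(img_half[0, 0])  # 0.5
    ```

    View morphing additionally pre-warps the two images to a common parallel-camera configuration before this step and post-warps the result, so that the linear interpolation corresponds to a physically valid camera motion.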

    Algorithm animation and its application to artificial neural network learning

    Algorithm animation is a means of exploring the dynamic behavior of algorithms using computer-generated graphics to represent algorithm data and operations. Research in this field has focused on the architecture of flexible environments for exploring small, complex algorithms for data structure manipulation. This thesis describes a project examining two relatively unexplored aspects of algorithm animation: issues of view design effectiveness, and its application to a different type of algorithm, namely back-propagation artificial neural network learning. The work entailed developing a framework for profiling views according to attributes such as symmetry, regularity, and complexity. This framework was based on current research in graphical data analysis and perception, and served as a means of informally evaluating the effectiveness of certain design attributes. Three animated views were developed within the framework, together with a prototype algorithm animation system to run each view and provide the user/viewer interactive control of both the learning process and the animation. Three simple artificial neural network classifiers were studied through nine structured investigations. These investigations explored various issues raised at the project outset. Findings from these investigations indicate that animated views can portray algorithm behaviors such as convergence, feature extraction, and oscillatory behavior at the onset of learning. The prototype algorithm animation system design satisfied the initial requirements of extensibility and end-user run-time control. The degree to which a view is informative was found to depend on the combined view design and the algorithm variables portrayed. Strengths and weaknesses of the view design framework were identified. Suggested improvements to the design framework, view designs and algorithm system architecture are described in the context of future work.
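
    The raw material for animating back-propagation is a per-step record of the network's state. A minimal sketch, under assumptions (the network shape, learning rate, and snapshot format are illustrative, not the thesis's): train a tiny one-hidden-layer network and save the weights after every epoch, giving the frame data an animated view would render.

    ```python
    import numpy as np

    def train_and_record(X, y, hidden=4, epochs=300, lr=0.5, seed=0):
        """Plain back-propagation on a one-hidden-layer network, recording a
        weight snapshot after each epoch for later animation. Illustrative."""
        rng = np.random.default_rng(seed)
        W1 = rng.normal(0, 1, (X.shape[1], hidden))
        W2 = rng.normal(0, 1, (hidden, 1))
        frames = []
        for _ in range(epochs):
            h = np.tanh(X @ W1)                          # forward pass
            out = 1 / (1 + np.exp(-(h @ W2)))
            delta_out = (out - y) * out * (1 - out)      # squared-error gradient
            dW2 = h.T @ delta_out
            dW1 = X.T @ ((delta_out @ W2.T) * (1 - h**2))
            W1 -= lr * dW1; W2 -= lr * dW2               # backward pass / update
            frames.append((W1.copy(), W2.copy()))        # snapshot for animation
        return frames, out

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])               # XOR classifier
    frames, out = train_and_record(X, y)
    print(len(frames))  # 300
    ```

    A view such as a weight-matrix heatmap or a trajectory plot would then be driven frame by frame from `frames`, making behaviors like convergence or early oscillation directly visible.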