1,000 research outputs found

    A Tracking Method for 2D canvas in MR-based interactive painting system

    ABSTRACT We have proposed a mixed reality (MR) based painting system. In this paper, we tackle a problem in the conventional method, which relies entirely on a magnetic sensor attached to the canvas: users had to detach and reattach the sensor whenever they wanted to switch canvases during painting. We therefore aim to detect the shape of the canvas automatically, for registration purposes, using a vision-based tracking method. Using a region detection method such as MSER, we detect and track the shape on the canvas. We then compute the camera pose for virtually overlaying the painting result. Finally, we can generate results equivalent to those obtained with the sensor.
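    The registration step this abstract describes (detect the canvas outline, then compute a pose/warp so the painting can be overlaid) can be sketched, for the planar-canvas case, as a homography estimated from four detected corners. The sketch below uses the generic Direct Linear Transform (DLT); the corner coordinates are made up for illustration, and this is not the authors' implementation:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (4 or more correspondences) via the Direct Linear Transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply homography H to a 2D point (with perspective divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical canvas corners found in the camera image
# (e.g. by fitting a quadrilateral to a detected MSER region)
image_corners = [(100, 120), (420, 100), (440, 380), (90, 400)]
# Corresponding corners of the virtual painting texture
texture_corners = [(0, 0), (512, 0), (512, 512), (0, 512)]

H = estimate_homography(image_corners, texture_corners)
```

    In a system of this kind, the warp would be re-estimated per camera frame from the tracked region, and its inverse used to render the virtual painting onto the detected canvas.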


    Volumetric cloud generation using a Chinese brush calligraphy style

    Includes bibliographical references. Clouds are an important feature of any real or simulated environment in which the sky is visible. Their amorphous, ever-changing and illuminated features make the sky vivid and beautiful. However, these features increase the complexity of both real-time rendering and modelling. It is difficult to design and build volumetric clouds in an easy and intuitive way, particularly if the interface is intended for artists rather than programmers. We propose a novel modelling system, motivated by an ancient painting style, Chinese Landscape Painting, to address this problem. With the use of only one brush and one colour, an artist can paint a vivid and detailed landscape efficiently. In this research, we develop three emulations of a Chinese brush: a skeleton-based brush, a 2D texture footprint and a dynamic 3D footprint, all driven by the motion and pressure of a stylus pen. We propose a hybrid mapping to generate both the body and surface of volumetric clouds from the brush footprints. Our interface integrates these components, along with 3D canvas control and GPU-based volumetric rendering, into an interactive cloud modelling system able to create the various types of clouds occurring in nature. User tests indicate that our brush calligraphy approach is preferred to conventional volumetric cloud modelling and that it produces convincing 3D cloud formations in an intuitive and interactive fashion. While traditional modelling systems focus on surface generation of 3D objects, our brush calligraphy technique constructs the interior structure. This forms the basis of a new modelling style for objects with amorphous shape.

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of a wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In doing so, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
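    The projective texture mapping step mentioned above can be illustrated generically: each 3D vertex of the anatomy model is projected through the wound photo's camera matrix, and the resulting image coordinates, normalized by the image size, become that vertex's texture coordinates. The camera matrix and vertices below are hypothetical; this is a sketch of the general technique, not the paper's algorithm:

```python
import numpy as np

def projective_uvs(P, vertices, img_w, img_h):
    """Project 3D vertices through camera matrix P (3x4) and return
    per-vertex (u, v) texture coordinates normalized by the image size."""
    verts = np.asarray(vertices, dtype=float)
    homo = np.hstack([verts, np.ones((len(verts), 1))])  # homogeneous coords
    proj = (P @ homo.T).T              # rows of (x, y, w) in image space
    xy = proj[:, :2] / proj[:, 2:3]    # perspective divide
    return xy / np.array([img_w, img_h])

# Hypothetical pinhole camera: focal length 800 px, principal point (320, 240)
P = np.array([[800.0,   0.0, 320.0, 0.0],
              [  0.0, 800.0, 240.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])
# Hypothetical mesh vertices in camera coordinates (z > 0, in front of camera)
verts = [(0.0, 0.0, 2.0), (0.1, -0.05, 2.5)]
uvs = projective_uvs(P, verts, img_w=640, img_h=480)
```

    A full implementation would additionally discard vertices outside the photo's frustum and handle occlusion, so that only surfaces visible to the camera receive the wound texture.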

    The application of traditional abstract painting in new media environments

    This thesis presents an investigation into the process of new forms of installation art; an exploration of the shifting of artistic activities from conventional studios and fine art practices to installation art practices. A combined approach was taken whilst undertaking research: studying literature within the field, engaging with other practicing artists and conducting practical analysis. There is also a discussion of new technology in the field of abstract expressionist painting and a dialogue on the differences between traditional and digital abstract painting with regard to their processes. The reflective and issue-finding processes undertaken by the researcher in this investigation are discussed in relation to the changes in his practice. The artist's experimentation with materials and processes, and the implications of this for the relationship between the artwork and the viewer, are also discussed. The thesis is divided into seven chapters of text and images with an accompanying DVD including the main abstract new media installation. The first chapter includes an introduction to the research with the methodology applied. The second chapter involves using the computer to produce abstract painting. The third chapter then focuses on the differences between digital and traditional abstract painting. Moving on from this, the fourth chapter covers multimedia installation and its associated processes. The fifth chapter deals with reflections on the practice element of this investigation. The sixth chapter engages with the evaluation of and feedback from the field trip and with notes from artists with regard to practical production. The final chapter draws conclusions from this research with suggestions for further studies.
This thesis will make the following contributions to knowledge: developing the process of animation from 2D abstract painting to a 3D environment with the inclusion of animation; using new technology as a creative tool to enable artists to gain new insights into creative art practices which provide audiences with new experiences of new and multimedia installation; advancing the creative process of new and multimedia artworks, taking account of new techniques relating to the manipulation of viewpoints, picture planes and pigment surface as related to traditional methods of image creation and recording and their new media counterparts.

    FACING EXPERIENCE: A PAINTER’S CANVAS IN VIRTUAL REALITY

    Full version unavailable due to 3rd party copyright restrictions. This research investigates how shifts in perception might be brought about through the development of visual imagery created by the use of virtual environment technology. Through a discussion of historical uses of immersion in art, this thesis will explore how immersion functions and why immersion has been a goal for artists throughout history. It begins with a discussion of ancient cave drawings and the relevance of Plato’s Allegory of the Cave. Next it examines the biological origins of “making special.” The research will discuss how this concept, combined with the ideas of “action” and “reaction,” has reinforced the view that art is fundamentally experiential rather than static. The research emphasizes how present-day virtual environment art, in providing a space that engages visitors in computer graphics, expands on previous immersive artistic practices. The thesis examines the technical context in which the research occurs by briefly describing the use of computer science technologies, the fundamentals of visual arts practices, and the importance of aesthetics in new media, and provides a description of my artistic practice. The aim is to investigate how combining these approaches can enhance virtual environments as artworks. The computer science of virtual environments includes both hardware and software programming. The resultant virtual environment experiences are technologically dependent on the types of visual displays being used, including screens and monitors, and their subsequent viewing affordances. Virtual environments fill the field of view and can be experienced with a head-mounted display (HMD) or a large screen display. The sense of immersion gained through the experience depends on how tracking devices and related peripheral devices are used to facilitate interaction. 
The thesis discusses visual arts practices with a focus on how illusions shift our cognition and perception in the visual modalities. This discussion includes how perceptual thinking is the foundation of art experiences, how analogies are the foundation of cognitive experiences and how the two intertwine in art experiences for virtual environments. An examination of the aesthetic strategies used by artists and new media critics is presented to discuss new media art. This thesis investigates the visual elements used in virtual environments and prescribes strategies for creating art for virtual environments. Methods constituting a unique virtual environment practice that focuses on visual analogies are discussed. The artistic practice that is discussed as the basis for this research also concentrates on experiential moments and shifts in perception and cognition and references Douglas Hofstadter, Rudolf Arnheim and John Dewey. Virtual environments provide for experiences in which the imagery generated updates in real time. Following an analysis of existing artwork and critical writing relative to the field, the process of inquiry has required the creation of artworks that involve tracking systems, projection displays, sound work, and an understanding of the importance of the visitor. In practice, the research has shown that the visitor should be seen as an interlocutor, interacting from a first-person perspective with virtual environment events, where avatars or other instrumental intermediaries, such as guns, vehicles, or menu systems, do not occlude the view. The aesthetic outcomes of this research are the result of combining visual analogies, real time interactive animation, and operatic performance in immersive space. The environments designed in this research were informed initially by paintings created with imagery generated in a hypnopompic state, or during the moments of transitioning from sleeping to waking. 
The drawings often emphasize emotional moments as caricatures and/or elements of the face as seen from a number of perspectives simultaneously, in the way of some cartoons, primitive artwork or Cubist imagery. In the imagery, the faces indicate situations, emotions and confrontations which can offer moments of humour and reflective exploration. At times, the faces usurp the space and stand in representation as both face and figure. The power of the placement of the caricatures in the paintings becomes apparent as the imagery stages the expressive moment. The placement of faces sets the scene, establishes relationships and promotes the honesty and emotions that develop over time as the paintings are scrutinized. The development process of creating virtual environment imagery starts with hand-drawn sketches of characters, develops further as paintings on “digital canvas”, continues as the characters are built into animated, three-dimensional models, and finishes as they are incorporated into a virtual environment. The imagery is generated while drawing, typically with paper and pencil, in a stream of consciousness during the hypnopompic state. This method became an aesthetic strategy for producing a snappy, straightforward sketch. The sketches are explored further as they are worked up as paintings. During the painting process, the figures become fleshed out and their placement on the page, in essence, brings them to life. These characters inhabit a world that I explore even further by building them into three-dimensional models and placing them in computer generated virtual environments. The methodology of developing and placing the faces/figures became an operational strategy for building virtual environments. In order to open up the range of art virtual environments, and develop operational strategies for visitors’ experience, the characters and their facial features are used as navigational strategies, signposts and methods of wayfinding in order to sustain a stream of consciousness type of navigation. 
Faces and characters were designed to represent those intimate moments of self-reflection and confrontation that occur daily within ourselves and with others. They sought to reflect moments of wonderment, hurt, curiosity and humour that could subsequently be relinquished for more practical or purposeful endeavours. They were intended to create conditions in which visitors might reflect upon their emotional state, enabling their understanding and trust of their personal space, in which decisions are made and the nature of the world is determined. In order to extend the split-second, frozen moment of recognition that a painting affords, the caricatures and their scenes are given new dimensions as they become characters in a performative virtual reality. Emotables, distinct from avatars, are characters confronting visitors in the virtual environment to engage them in an interactive, stream of consciousness, non-linear dialogue. Visitors are also situated with a role in a virtual world, where they are required to adapt to the language of the environment in order to progress through the dynamics of a drama. The research showed that imagery created in a context of whimsy and fantasy could bring ontological meaning and aesthetic experience into the interactive environment, such that emotables, or facially expressive computer graphic characters, could be seen as another brushstroke in painting a world of virtual reality.

    SandCanvas: A Multi-touch Art Medium Inspired by Sand Animation

    DOI: 10.1145/1978942.1979133. Conference on Human Factors in Computing Systems - Proceedings, pp. 1283-129

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. However, one of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics and its utility when designing user interfaces more generally.

    Video based dynamic scene analysis and multi-style abstraction.

    Tao, Chenjun. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 89-97). Abstracts in English and Chinese.
    Abstract --- p.i
    Acknowledgements --- p.iii
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Window-oriented Retargeting --- p.1
    Chapter 1.2 --- Abstraction Rendering --- p.4
    Chapter 1.3 --- Thesis Outline --- p.6
    Chapter 2 --- Related Work --- p.7
    Chapter 2.1 --- Video Migration --- p.8
    Chapter 2.2 --- Video Synopsis --- p.9
    Chapter 2.3 --- Periodic Motion --- p.14
    Chapter 2.4 --- Video Tracking --- p.14
    Chapter 2.5 --- Video Stabilization --- p.15
    Chapter 2.6 --- Video Completion --- p.20
    Chapter 3 --- Active Window Oriented Video Retargeting --- p.21
    Chapter 3.1 --- System Model --- p.21
    Chapter 3.1.1 --- Foreground Extraction --- p.23
    Chapter 3.1.2 --- Optimizing Active Windows --- p.27
    Chapter 3.1.3 --- Initialization --- p.29
    Chapter 3.2 --- Experiments --- p.32
    Chapter 3.3 --- Summary --- p.37
    Chapter 4 --- Multi-Style Abstract Image Rendering --- p.39
    Chapter 4.1 --- Abstract Images --- p.39
    Chapter 4.2 --- Multi-Style Abstract Image Rendering --- p.42
    Chapter 4.2.1 --- Multi-style Processing --- p.45
    Chapter 4.2.2 --- Layer-based Rendering --- p.46
    Chapter 4.2.3 --- Abstraction --- p.47
    Chapter 4.3 --- Experimental Results --- p.49
    Chapter 4.4 --- Summary --- p.56
    Chapter 5 --- Interactive Abstract Videos --- p.58
    Chapter 5.1 --- Abstract Videos --- p.58
    Chapter 5.2 --- Multi-Style Abstract Video --- p.59
    Chapter 5.2.1 --- Abstract Images --- p.60
    Chapter 5.2.2 --- Video Morphing --- p.65
    Chapter 5.2.3 --- Interactive System --- p.69
    Chapter 5.3 --- Interactive Videos --- p.76
    Chapter 5.4 --- Summary --- p.77
    Chapter 6 --- Conclusions --- p.81
    Chapter A --- List of Publications --- p.83
    Chapter B --- Optical flow --- p.84
    Chapter C --- Belief Propagation --- p.86
    Bibliography --- p.8