49 research outputs found

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    Get PDF
    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) to reach its practical potential, especially for real-world assembly support. In this context, the techniques used to manipulate 3D objects are an important research area. This study therefore developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues related to 3D object position and orientation deviations during manipulation. The research firstly enhanced the existing device-based 3D object rotation technique with an innovative control structure that uses the tilting and skewing amplitudes of the handheld mobile device to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the amplitude threshold values, the 3D object rotates continuously at a pre-defined angular speed per second, preventing over-rotation of the handheld mobile device. Such over-rotation is common when the existing technique is used for large-range 3D object rotations, and it needs to be solved because it causes 3D object registration errors and a display issue in which the 3D object does not appear consistently within the user’s range of view. Secondly, the existing device-based 3D object manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, preventing the position and orientation deviations caused by integrating both DOF into the same control structure. The result is an improved device-based interaction technique with better task completion times for 3D object rotation on its own and for 3D object manipulation as a whole within handheld mobile AR interfaces. A pilot test was carried out before the main tests to determine several pre-defined values used in the control structure of the proposed 3D object rotation technique. A series of 3D object rotation and manipulation tasks was designed and developed as separate experiments to benchmark the proposed rotation and manipulation techniques against existing ones on task completion time (s). Two groups of participants aged 19-24 years were selected, one per experiment, with each group consisting of sixteen participants. Each participant completed twelve trials, giving a total of 192 trials per experiment. Repeated measures analysis was used to analyze the data. The results show statistically that the developed 3D object rotation technique markedly outperformed the existing technique, with significantly shorter task completion times: 2.04 s shorter on easy tasks and 3.09 s shorter on hard tasks, comparing mean times over all successful trials. With respect to failed trials, the proposed rotation technique was 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks than the existing technique. Similar results held for the 3D object manipulation tasks, with the proposed manipulation technique achieving an overall task completion time 9.529 s shorter than the existing technique. Based on these findings, an improved device-based interaction technique has been successfully developed to address the insufficient functionality of the current technique.
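    The control structure summarised above also lends itself to a compact illustration: the device's tilt and skew amplitudes select the rotation axis and direction, and crossing a threshold starts a continuous rotation at a fixed angular speed so that the device itself never has to be over-rotated. The Python sketch below is only a minimal illustration of that idea; the axis mapping, threshold values and angular speed are assumptions made for this example, not the pre-defined values determined in the thesis' pilot test.

```python
# Minimal sketch of a threshold-based, device-driven rotation control loop.
# All names, thresholds and the angular speed are illustrative assumptions;
# the thesis determines its own pre-defined values through a pilot test.

TILT_THRESHOLD_DEG = 15.0        # assumed tilt amplitude threshold
SKEW_THRESHOLD_DEG = 15.0        # assumed skew amplitude threshold
ANGULAR_SPEED_DEG_PER_S = 20.0   # assumed pre-defined angular speed


def rotation_step(tilt_deg, skew_deg, dt):
    """Return the (x, y, z) rotation increment for the virtual object.

    The device's tilt and skew amplitudes select the rotation axis and
    direction; once a threshold is exceeded, the object keeps rotating at
    a constant angular speed, so the user never over-rotates the device.
    """
    dx = dy = 0.0
    if abs(tilt_deg) > TILT_THRESHOLD_DEG:
        # Tilting forward/backward rotates the object about its x axis.
        dx = ANGULAR_SPEED_DEG_PER_S * dt * (1.0 if tilt_deg > 0 else -1.0)
    if abs(skew_deg) > SKEW_THRESHOLD_DEG:
        # Skewing left/right rotates the object about its y axis.
        dy = ANGULAR_SPEED_DEG_PER_S * dt * (1.0 if skew_deg > 0 else -1.0)
    return dx, dy, 0.0


# Example: device tilted 25 degrees during a 16 ms frame.
print(rotation_step(25.0, 5.0, 0.016))  # approximately (0.32, 0.0, 0.0)
```

    Keeping translation on a separate control (for example, touch dragging) rather than reusing the same tilt mapping mirrors the DOF separation that the abstract credits with removing the position and orientation deviations.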

    FACING EXPERIENCE: A PAINTER’S CANVAS IN VIRTUAL REALITY

    Get PDF
    Full version unavailable due to 3rd party copyright restrictions. This research investigates how shifts in perception might be brought about through the development of visual imagery created by the use of virtual environment technology. Through a discussion of historical uses of immersion in art, this thesis explores how immersion functions and why immersion has been a goal for artists throughout history. It begins with a discussion of ancient cave drawings and the relevance of Plato’s Allegory of the Cave. Next it examines the biological origins of “making special.” The research discusses how this concept, combined with the ideas of “action” and “reaction,” has reinforced the view that art is fundamentally experiential rather than static. The research emphasizes how present-day virtual environment art, in providing a space that engages visitors in computer graphics, expands on previous immersive artistic practices. The thesis examines the technical context in which the research occurs by briefly describing the use of computer science technologies, the fundamentals of visual arts practices, and the importance of aesthetics in new media, and provides a description of my artistic practice. The aim is to investigate how combining these approaches can enhance virtual environments as artworks. The computer science of virtual environments includes both hardware and software programming. The resultant virtual environment experiences are technologically dependent on the types of visual displays being used, including screens and monitors, and their subsequent viewing affordances. Virtual environments fill the field of view and can be experienced with a head mounted display (HMD) or a large screen display. The sense of immersion gained through the experience depends on how tracking devices and related peripheral devices are used to facilitate interaction. The thesis discusses visual arts practices with a focus on how illusions shift our cognition and perception in the visual modalities. This discussion includes how perceptual thinking is the foundation of art experiences, how analogies are the foundation of cognitive experiences, and how the two intertwine in art experiences for virtual environments. An examination of the aesthetic strategies used by artists and new media critics is presented to discuss new media art. This thesis investigates the visual elements used in virtual environments and prescribes strategies for creating art for virtual environments. Methods constituting a unique virtual environment practice that focuses on visual analogies are discussed. The artistic practice that is discussed as the basis for this research also concentrates on experiential moments and shifts in perception and cognition, and references Douglas Hofstadter, Rudolf Arnheim and John Dewey. Virtual environments provide for experiences in which the imagery generated updates in real time. Following an analysis of existing artwork and critical writing relative to the field, the process of inquiry has required the creation of artworks that involve tracking systems, projection displays, sound work, and an understanding of the importance of the visitor. In practice, the research has shown that the visitor should be seen as an interlocutor, interacting from a first-person perspective with virtual environment events, where avatars or other instrumental intermediaries, such as guns, vehicles, or menu systems, do not occlude the view.
The aesthetic outcomes of this research are the result of combining visual analogies, real-time interactive animation, and operatic performance in immersive space. The environments designed in this research were informed initially by paintings created with imagery generated in a hypnopompic state, or the moments of transitioning from sleeping to waking. The drawings often emphasize emotional moments as caricatures and/or elements of the face as seen from a number of perspectives simultaneously, in the way of some cartoons, primitive artwork or Cubist imagery. In the imagery, the faces indicate situations, emotions and confrontations which can offer moments of humour and reflective exploration. At times, the faces usurp the space and stand in representation as both face and figure. The power of the placement of the caricatures in the paintings becomes apparent as the imagery stages the expressive moment. The placement of faces sets the scene, establishes relationships and promotes the honesty and emotions that develop over time as the paintings are scrutinized. The development process of creating virtual environment imagery starts with hand-drawn sketches of characters, develops further as paintings on a “digital canvas”, continues as the imagery is built into animated, three-dimensional models, and ends with those models being incorporated into a virtual environment. The imagery is generated while drawing, typically with paper and pencil, in a stream of consciousness during the hypnopompic state. This method became an aesthetic strategy for producing a snappy, straightforward sketch. The sketches are explored further as they are worked up as paintings. During the painting process, the figures become fleshed out, and their placement on the page, in essence, brings them to life. These characters inhabit a world that I explore even further by building them into three-dimensional models and placing them in computer-generated virtual environments. The methodology of developing and placing the faces/figures became an operational strategy for building virtual environments. In order to open up the range of art virtual environments, and to develop operational strategies for visitors’ experience, the characters and their facial features are used as navigational strategies, signposts and methods of wayfinding in order to sustain a stream-of-consciousness type of navigation. Faces and characters were designed to represent those intimate moments of self-reflection and confrontation that occur daily within ourselves and with others. They sought to reflect moments of wonderment, hurt, curiosity and humour that could subsequently be relinquished for more practical or purposeful endeavours. They were intended to create conditions in which visitors might reflect upon their emotional state, enabling their understanding and trust of their personal space, in which decisions are made and the nature of the world is determined. In order to extend the split-second, frozen moment of recognition that a painting affords, the caricatures and their scenes are given new dimensions as they become characters in a performative virtual reality. Emotables, distinct from avatars, are characters confronting visitors in the virtual environment to engage them in an interactive, stream-of-consciousness, non-linear dialogue. Visitors are also given a role in the virtual world, where they are required to adapt to the language of the environment in order to progress through the dynamics of a drama.
The research showed that imagery created in a context of whimsy and fantasy could bring ontological meaning and aesthetic experience into the interactive environment, such that emotables, or facially expressive computer graphic characters, could be seen as another brushstroke in painting a world of virtual reality.

    NEUVis: Comparing Affective and Effective Visualisation

    Get PDF
    Data visualisations are useful for providing insight from complex scientific data. However, even with visualisation, scientific research is difficult for non-scientists to comprehend. When developed by designers in collaboration with scientists, data visualisation can be used to articulate scientific data in a way that non-experts can understand. Creating such human-centred visualisations is a unique challenge, and there are no frameworks to support their design. In response, this thesis presents a practice-led study investigating design methods that can be used to develop Non-Expert User Visualisations (NEUVis), data visualisations for the general public, and the responses that people have to different kinds of NEUVis. For this research, two groups of ten users participated in quantitative studies, informed by Yvonna Lincoln and Egon Guba’s method of Naturalistic Inquiry, which asked non-scientists to express their cognitive and emotional response to NEUVis using different media. The three types of visualisations studied were infographics, 3D animations and an interactive installation. The installation used in the study, entitled 18S rDNA, was developed and evaluated as part of this research using John Zimmerman’s Research Through Design methodology. 18S rDNA embodies the knowledge and design methods that were developed for this research, and provided an opportunity for explication of the entire NEUVis design process. The research findings indicate that developing visualisations for the non-expert audience requires a new process, different to the way scientists visualise data. The result of this research describes how creative practitioners collaborate with primary researchers and presents a new human-centred design thinking model for NEUVis. This model includes two design tools. The first tool helps designers merge user needs with the data they wish to visualise. The second tool helps designers take that merged information and begin an iterative, user-centred design process.

    Eighth Biennial Report: April 2005 – March 2007

    No full text

    Interim research assessment 2003-2005 - Computer Science

    Get PDF
    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.

    Modeling and real-time rendering of participating media using the GPU

    Get PDF
    This thesis deals with modeling, illuminating and rendering participating media in real time using graphics hardware. In a first part, we begin by developing a method to render heterogeneous layers of fog for outdoor scenes. The medium is modeled horizontally in a 2D basis of Haar or linear/quadratic B-Spline functions, whose coefficients can be loaded from a fogmap, i.e. a grayscale density image. To give the fog its vertical thickness, it is assigned an altitude-dependent attenuation coefficient that controls how quickly the density falls off with vertical distance from the medium. To prepare the rendering step, we apply a wavelet transform to the fog's density map and extract a coarse approximation (B-Spline function basis) and a series of layers of details (B-Spline wavelet bases), ordered by frequency; summed back together, they reconstitute the original density map. Each of these 2D function bases can be viewed as a grid of coefficients. During rendering on the GPU, each of these grids is traversed step by step, cell by cell, from the viewer's position to the nearest solid surface. Thanks to the separation of the different frequencies of detail during precomputation, we can optimize the rendering by visualizing only the details that contribute most to the final image, aborting the grid traversal at a distance that depends on the grid's frequency. We then present further work on the same type of fog: the use of the wavelet transform to represent the fog's density in a non-uniform grid, the automatic generation of density maps and their animation based on Julia fractals, and finally a first step towards real-time single-scattering illumination of the fog, in which we are able to simulate shadows cast by the medium and by the geometry. In a second part, we deal with modeling, illuminating and rendering full 3D single-scattering sampled media such as smoke (without physical simulation) on the GPU. Our method is inspired by light propagation volumes (LPV), a technique originally intended only to propagate fully diffuse indirect lighting after a first bounce on the geometry. We adapt it to direct lighting and to the illumination of both surfaces and participating media. The medium is provided as a set of radial basis functions (blobs) and is then transformed, together with the solid surfaces, into a set of voxels, so that both can be handled in a common representation. By analogy with the LPV, we introduce an occlusion propagation volume, which we use to compute the integral of the optical density between each light source and every other cell containing a voxel generated either from the medium or from a surface. This step is integrated into the rendering loop, which makes it possible to animate the participating medium as well as the light sources without any particular constraint. We simulate all types of shadows: cast by the medium or by the surfaces, onto the medium or onto the surfaces.
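    The frequency-dependent early termination described above can be sketched on the CPU: decompose the density into a coarse band plus detail bands, then, while marching along a ray, stop sampling each detail band beyond a per-band cutoff distance so that distant high-frequency detail is skipped. The sketch below is a simplified 1D Haar illustration with assumed cutoff distances and cell size; the thesis itself traverses 2D B-Spline wavelet bases in a GPU shader, so none of the names or values here come from it.

```python
import numpy as np

# Simplified 1D illustration of marching a wavelet-decomposed fog density
# and dropping high-frequency detail bands beyond per-band cutoff distances.
# The Haar basis, cell size and cutoffs are assumptions for this sketch.


def haar_decompose(density, levels):
    """Split a 1D density signal (power-of-two length) into detail bands
    (finest first) followed by a final coarse band."""
    bands, coarse = [], density.astype(float)
    for _ in range(levels):
        avg = 0.5 * (coarse[0::2] + coarse[1::2])
        bands.append(0.5 * (coarse[0::2] - coarse[1::2]))
        coarse = avg
    bands.append(coarse)
    return bands


def reconstruct(bands, drop_finest):
    """Rebuild the full-resolution density with the `drop_finest` finest
    detail bands zeroed out."""
    *details, signal = bands
    for k in reversed(range(len(details))):
        use = details[k] if k >= drop_finest else np.zeros_like(details[k])
        up = np.empty(2 * len(signal))
        up[0::2] = signal + use
        up[1::2] = signal - use
        signal = up
    return signal


def optical_depth(bands, cell_size, cutoffs):
    """March front to back, switching to a coarser reconstruction once the
    distance passes each detail band's cutoff, accumulating density * step."""
    recons = [reconstruct(bands, d) for d in range(len(bands))]
    depth = 0.0
    for i in range(len(recons[0])):
        dist = (i + 0.5) * cell_size
        drop = sum(1 for c in cutoffs if dist >= c)  # bands already cut off
        depth += recons[drop][i] * cell_size
    return depth


density = np.array([0.2, 0.3, 0.8, 0.9, 0.1, 0.0, 0.4, 0.5])
bands = haar_decompose(density, levels=2)
# Finest band contributes within 3 units of the viewer, the next within 6.
print(optical_depth(bands, cell_size=1.0, cutoffs=[3.0, 6.0]))
```

    Precomputing one reconstruction per number of dropped bands keeps the sketch short; the GPU method described above instead traverses each frequency grid separately and simply aborts the finer grids earlier along the ray.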

    Languages of games and play: A systematic mapping study

    Get PDF
    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.

    Blending the Material and Digital World for Hybrid Interfaces

    Get PDF
    The development of digital technologies in the 21st century is progressing continuously, and new device classes such as tablets, smartphones or smartwatches are finding their way into our everyday lives. However, this development also poses problems, as the prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities and therefore demand their users' full attention. Compared to traditional tools and analog interfaces, the human skills to experience and manipulate material in its natural environment and context remain unexploited. To combine the best of both, a key question is how the material and digital worlds can be blended to design and realize novel hybrid interfaces in a meaningful way. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. Therefore, this doctoral thesis rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. The development of such hybrid interfaces raises overarching research questions about their design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods to explore different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory and iterative development process using digital fabrication methods and novel materials. As its main contribution, the thesis presents four research projects that apply and discuss different visual and interactive augmentation principles in real-world applications. The applications range from digitally enhanced paper and interactive cords to visual watch-strap extensions and novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none of them are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, the thesis' extensive engineering of versatile research platforms is accompanied by overarching conceptual work, user evaluations and technical experiments, as well as literature reviews.