3 research outputs found

    Scalable Exploration of Complex Objects and Environments Beyond Plain Visual Replication

    Digital multimedia content and presentation media are rapidly growing in sophistication and can now describe detailed representations of the physical world. 3D exploration experiences allow people to appreciate, understand, and interact with intrinsically virtual objects. Communicating information about objects requires the ability to explore them from different angles, and to combine photorealistic or illustrative presentations of the objects themselves with additional data that provides further insight, typically represented in the form of annotations. Providing these capabilities effectively requires solving important problems in visualization and user interaction. In this thesis, I studied these problems in the cultural heritage computing domain, focusing on the very common and important special case of mostly planar, but visually, geometrically, and semantically rich objects. These include roughly flat objects with a standard frontal viewing direction (e.g., paintings, inscriptions, bas-reliefs), as well as visualizations of fully 3D objects from a particular point of view (e.g., canonical views of buildings or statues). Selecting a precise application domain and a specific presentation mode allowed me to concentrate on the well-defined use case of exploring annotated relightable stratigraphic models (in particular, for local and remote museum presentation). My main results and contributions to the state of the art are a novel technique for interactively controlling visualization lenses while automatically maintaining good focus-and-context parameters, a novel approach for avoiding clutter in an annotated model and for guiding users towards interesting areas, and a method for structuring audio-visual object annotations into a graph and for using that graph to improve guidance and support storytelling and automated tours. We demonstrated the effectiveness and potential of these techniques through interactive exploration sessions on screens of various sizes and types, ranging from desktop devices to large-screen displays for a walk-up-and-use museum installation. KEYWORDS - Computer Graphics, Human-Computer Interaction, Interactive Lenses, Focus-and-Context, Annotated Models, Cultural Heritage Computing
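    To make the annotation-graph idea concrete, the following is a minimal, hypothetical C++ sketch of how annotations on a mostly planar object could be stored as graph nodes and turned into an automated tour by a greedy, interest-driven traversal. The structure and all names (Annotation, interest, buildTour) are illustrative assumptions for this listing, not the thesis implementation.

```cpp
// Minimal sketch: annotations as graph nodes, greedy traversal as a tour order.
#include <cstdio>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Annotation {
    std::string id;
    std::string audioClip;             // narration tied to this region (hypothetical field)
    float x = 0, y = 0, radius = 0;    // 2D anchor on the frontal view
    float interest = 0;                // authored importance, drives guidance
    std::vector<std::string> related;  // graph edges to thematically related annotations
};

// Greedy tour: start at the most interesting annotation, then always move to the
// most interesting unvisited neighbour; jump to the global best when a branch ends.
std::vector<std::string> buildTour(const std::unordered_map<std::string, Annotation>& graph) {
    std::vector<std::string> tour;
    std::unordered_set<std::string> visited;

    auto best = [&](const std::vector<std::string>& candidates) -> const Annotation* {
        const Annotation* pick = nullptr;
        for (const auto& id : candidates) {
            auto it = graph.find(id);
            if (it == graph.end() || visited.count(id)) continue;
            if (!pick || it->second.interest > pick->interest) pick = &it->second;
        }
        return pick;
    };

    std::vector<std::string> all;
    for (const auto& entry : graph) all.push_back(entry.first);

    const Annotation* current = best(all);
    while (current) {
        tour.push_back(current->id);
        visited.insert(current->id);
        const Annotation* next = best(current->related);
        current = next ? next : best(all);  // fall back when no unvisited neighbour remains
    }
    return tour;
}

int main() {
    std::unordered_map<std::string, Annotation> g;
    g["signature"] = {"signature", "sig.ogg", 0.8f, 0.9f, 0.05f, 0.4f, {"retouch"}};
    g["face"]      = {"face",      "face.ogg", 0.5f, 0.3f, 0.10f, 0.9f, {"signature"}};
    g["retouch"]   = {"retouch",   "ret.ogg",  0.2f, 0.6f, 0.07f, 0.6f, {}};
    for (const auto& id : buildTour(g)) std::printf("%s\n", id.c_str());
    // Expected order: face, signature, retouch
}
```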

    Spatial CPU-GPU data structures for interactive rendering of large particle data

    In this work, I investigate the interactive visualization of arbitrarily large particle data sets which fit into system memory, but not into GPU memory. With conventional rendering techniques, the interactivity of visualizations is drastically reduced when rendering tens or hundreds of millions of objects. At the same time, graphics hardware memory capacity limits the size of data sets that can be placed in GPU memory for rendering. To circumvent these obstacles, a progressive rendering approach is employed, which gradually streams all particle data to the GPU and renders it without reducing or altering the particle data itself. The particle data is rendered according to a visibility sorting derived from occlusion relations between different parts of the data set, leading to a rendering order of scene contents guided by their importance for the rendered image. I analyze and compare possible implementation choices for rendering particles as opaque spheres in OpenGL, which forms the basis of the particle rendering application developed within this work. The application uses a multi-threaded architecture, in which data preprocessing on a CPU thread and a rendering algorithm on a GPU thread ensure that the user can interact with the application at any time. In particular, it is guaranteed that the user can explore the particle data interactively, by keeping the latency from user input to seeing the effects of that input minimal. This is achieved by favoring user input over completeness of the rendered image at all stages of rendering. At the same time, the user receives immediate feedback about interactions, because all currently visible particles are re-projected into the next rendered image. The re-projection is realized with an on-GPU particle cache of visible particles that is built during particle data streaming and rendering, and that is drawn upon user interaction using the most recent camera configuration resulting from user input. The combination of the developed techniques allows interactive exploration of particle data sets with up to 1.5 billion particles on a commodity computer.
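    As a rough illustration of the scheduling idea (streaming chunks in visibility order, favoring user input over image completeness, and re-projecting the particle cache on interaction), here is a deliberately simplified, single-threaded C++ sketch. The real system uses OpenGL, a CPU preprocessing thread, and a GPU rendering thread, none of which are modeled here; all names and the simulated input pattern are illustrative assumptions.

```cpp
// Simplified sketch of progressive streaming with a cache re-projected on interaction.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Particle { float x, y, z, radius; };
struct Camera   { int version = 0; };   // bumped whenever the user interacts

int main() {
    // Pretend the preprocessing stage already delivered chunks sorted by estimated
    // visibility (most important parts of the scene first).
    std::vector<std::vector<Particle>> sortedChunks(100, std::vector<Particle>(1000));

    std::vector<Particle> gpuCache;      // stands in for the on-GPU particle cache
    Camera cam;
    int renderedWithCamera = cam.version;
    std::size_t nextChunk = 0;

    for (int frame = 0; frame < 300 && nextChunk < sortedChunks.size(); ++frame) {
        bool userMoved = (frame % 60 == 59);   // simulated user input
        if (userMoved) cam.version++;

        if (cam.version != renderedWithCamera) {
            // Re-projection: redraw everything already streamed under the newest
            // camera so the user gets immediate feedback, then resume streaming.
            std::printf("frame %d: re-project %zu cached particles\n",
                        frame, gpuCache.size());
            renderedWithCamera = cam.version;
            continue;                          // user input beats image completeness
        }

        // No interaction pending: stream and draw the next chunk, then append it to
        // the cache so later re-projections include it.
        const auto& chunk = sortedChunks[nextChunk++];
        gpuCache.insert(gpuCache.end(), chunk.begin(), chunk.end());
        std::printf("frame %d: streamed chunk %zu, cache now %zu particles\n",
                    frame, nextChunk, gpuCache.size());
    }
    return 0;
}
```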

    Women in Artificial Intelligence (AI)

    This Special Issue, entitled "Women in Artificial Intelligence", includes 17 papers by leading women scientists. The papers cover a broad range of research areas within Artificial Intelligence, including machine learning, perception, reasoning, and planning, among others. They have applications in relevant fields such as human health, finance, and education. It is worth noting that the Issue includes three papers dealing with different aspects of gender bias in Artificial Intelligence. All the papers have a woman as first author. We can proudly say that these women come from countries worldwide, such as France, the Czech Republic, the United Kingdom, Australia, Bangladesh, Yemen, Romania, India, Cuba, and Spain. In conclusion, apart from its intrinsic scientific value in combining interesting research works, this Special Issue intends to increase the visibility of women in AI, showing where they are, what they do, and how they contribute to developments in Artificial Intelligence from their different places, positions, research branches, and application fields. We planned to issue this book on Ada Lovelace Day (11/10/2022), a date internationally dedicated to the first computer programmer, a woman who had to fight the gender difficulties of her time in the 19th century. We also thank the publisher for making this possible, thus allowing this book to become part of the international activities dedicated to celebrating the value of women in ICT all over the world. With this book, we want to pay homage to all the women who have contributed over the years to the field of AI.