
    Towards Advanced Interactive Visualization for Virtual Atlases

    An atlas is generally defined as a bound collection of tables, charts, or illustrations describing a phenomenon. In an anatomical atlas, for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as a reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest based on medical images annotated by domain experts. Such an atlas may be employed, for example, for automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students are able to inspect and explore anatomical atlases in ways that were not possible with the traditional book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis for the atlas, empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education and biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.
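
    To make concrete the abstract's note that expert-annotated atlas regions can drive automatic segmentation, here is a minimal Python sketch (our illustration, not a method from any surveyed system). It assumes the expert label volumes have already been registered to the target image and fuses them by per-voxel majority vote, a common baseline for atlas-based segmentation:

        import numpy as np

        def majority_vote_fusion(atlas_labels):
            """Fuse label volumes from several registered, expert-annotated
            atlases by per-voxel majority vote."""
            stack = np.stack(atlas_labels)        # (n_atlases, *volume_shape)
            n_classes = int(stack.max()) + 1
            votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
            return votes.argmax(axis=0)           # winning label per voxel

        # Toy example: three 2x2 "volumes" annotated by different experts.
        a = np.array([[0, 1], [1, 1]])
        b = np.array([[0, 1], [0, 1]])
        c = np.array([[1, 1], [1, 1]])
        print(majority_vote_fusion([a, b, c]))    # -> [[0 1] [1 1]]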

    Interactive Collaboration in Virtual Reality for the Aerospace Industry

    Contrary to what a number of "articles" published in the press or on social media in recent years might suggest, virtual reality was not born in the last few years with the arrival of low-cost headsets; it entered the corporate world long ago, for example, and is now used routinely, notably for design and manufacturing activities. The growing complexity of these processes, combined with the distribution of large companies' experts across multiple sites, has raised the question of the "right way" to conduct virtual experiments that bring together several users immersed through different systems and possibly geographically distant. This problem has been the subject of various research efforts for several years, but it is easy to see that these results are still rarely (re)used in industry. To help advance this transfer, we are developing a prototyping system with Airbus Group to test some of these solutions in their working context. We therefore developed an immersive application tailored to their specific needs, then set up tests with real users to evaluate, under ecological conditions, the impact of the academic results. This project rests on a close collaboration: from gathering information on current and future processes, through co-designing solutions with their experts, to testing with end users. In this article, we present the first stages of this work by describing some of the collaborative interaction tools we developed and the first user tests we carried out.

    Show me insides: Investigating the influences of product exploded view on consumers’ mental imagery, comprehension, attitude, and purchase intention

    With the popularity of e-commerce, consumers increasingly purchase durable products online. To communicate product information effectively, various presentation formats have been developed (e.g., 3D views, 360-degree rotatable views, and enlarged pictures). This study focuses on the use of the exploded view in digital product presentation. An exploded view clearly shows which technical components a product contains, where these components are positioned, and how they are assembled. Seeing internal components could facilitate consumers' processing of product functions and attributes, but it might also overwhelm them. To understand how the exploded view influences consumer processing, this study investigates the influence of product function description (concrete vs. abstract) and product view (normal vs. exploded) on consumers' comprehension, attitude, and purchase intention. Drawing on construal-level theory, we expect construal congruence between product view and textual description to have positive effects. Two controlled experiments (N = 815) demonstrate that the influence of product view changes with the abstractness of the description. When processing concrete product descriptions, participants' imagery vividness was facilitated by the exploded view, which in turn led to enhanced comprehension, improved attitude, and higher purchase intention. When encountering abstract product descriptions, the normal product view led to higher comprehension and more favorable attitudes. Mental imagery vividness mediated the interaction effect between textual descriptions and pictorial views. These results provide guidelines for marketers' effective use of exploded views in e-commerce contexts.
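
    As a rough sketch of the kind of analysis such a 2 x 2 design with a mediator calls for, the following Python fragment fits the view-by-description interaction and the classic mediation regressions with statsmodels. The file name and column names are hypothetical, and the study itself may use bootstrapped indirect-effect tests rather than these regression steps:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical tidy data: one row per participant; the two factors
        # are coded 0/1 (view: normal/exploded, description: abstract/concrete).
        df = pd.read_csv("experiment.csv")

        # View x description interaction on the outcome.
        outcome = smf.ols("comprehension ~ view * description", data=df).fit()

        # Mediation steps: does the manipulation drive imagery vividness,
        # and does vividness carry the effect on comprehension?
        mediator = smf.ols("vividness ~ view * description", data=df).fit()
        full = smf.ols("comprehension ~ vividness + view * description", data=df).fit()

        print(outcome.summary())  # the interaction term tests construal congruence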

    Using positional information to provide context for biological image analysis with MorphoGraphX 2.0

    Positional information is a central concept in developmental biology. In developing organs, positional information can be idealized as a local coordinate system that arises from morphogen gradients controlled by organizers at key locations. This offers a plausible mechanism for the integration of the molecular networks operating in individual cells into the spatially coordinated multicellular responses necessary for the organization of emergent forms. Understanding how positional cues guide morphogenesis requires the quantification of gene expression and growth dynamics in the context of their underlying coordinate systems. Here we present recent advances in the MorphoGraphX software (Barbier de Reuille et al., 2015) that implement a generalized framework to annotate developing organs with local coordinate systems. These coordinate systems introduce an organ-centric spatial context to microscopy data, allowing gene expression and growth to be quantified and compared in the context of the positional information thought to control them.
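
    A minimal sketch of what an organ-centric coordinate buys you: assign each cell a position along a chosen organ axis measured from an organizer, then quantify expression as a function of that position. The function names and the straight-axis simplification are ours; MorphoGraphX 2.0 itself supports general, curved coordinate systems on segmented meshes:

        import numpy as np

        def axial_coordinate(centroids, origin, axis):
            """Signed distance of each cell centroid along an organ axis,
            measured from an organizer at `origin` (a linear simplification)."""
            axis = axis / np.linalg.norm(axis)
            return (centroids - origin) @ axis

        def expression_profile(coords, expression, n_bins=10):
            """Mean per-cell expression within positional bins, i.e. expression
            as a function of position along the organ axis."""
            edges = np.linspace(coords.min(), coords.max(), n_bins + 1)
            idx = np.clip(np.digitize(coords, edges) - 1, 0, n_bins - 1)
            return np.array([expression[idx == b].mean() if np.any(idx == b)
                             else np.nan for b in range(n_bins)])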

    Augmented manual fabrication methods for 2D tool positioning and 3D sculpting

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013, by Alec Rivers. Augmented manual fabrication involves using digital technology to assist a user engaged in a manual fabrication task. Methods in this space aim to combine the abilities of a human operator, such as motion planning and large-range mechanical manipulation, with technological capabilities that compensate for the operator's areas of weakness, such as precise 3D sensing, manipulation of complex shape data, and millimeter-scale actuation. This thesis presents two new augmented manual fabrication methods. The first is a method for helping a sculptor create an object that precisely matches the shape of a digital 3D model. In this approach, a projector-camera pair is used to scan a sculpture in progress, and the resulting scan data is compared to the target 3D model. The system then computes the changes necessary to bring the physical sculpture closer to the target shape and projects guidance directly onto the sculpture indicating where and how it should be changed, such as by adding or removing material. We describe multiple types of guidance that can be used to direct the sculptor, as well as several related applications of this technique. The second method is a means of precisely positioning a handheld tool on a sheet of material using a hybrid digital-manual approach. An operator is responsible for manually moving a frame containing the tool to the approximate neighborhood of the desired position. The device then detects the frame's position and uses digitally controlled actuators to move the tool within the frame to the exact target position. By doing this in a real-time feedback loop, the tool can be smoothly moved along a digitally specified 2D path, allowing many types of digital fabrication over an unlimited range using an inexpensive handheld tool.
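
    The second method's real-time feedback loop can be summarized abstractly: the operator supplies coarse motion of the frame, and the device cancels the residual error with its internal actuators whenever the target falls within their reach. The Python sketch below is our simplified illustration, with invented units and parameters rather than the thesis's actual control law:

        import numpy as np

        def fine_positioning_step(frame_pos, tool_offset, target, reach=2.0):
            """One loop iteration: the operator has moved the frame near the
            target; the device shifts the tool within the frame to cancel the
            remaining error if the correction fits within actuator `reach`
            (units and reach value are illustrative)."""
            error = target - (frame_pos + tool_offset)  # world-space error
            desired = tool_offset + error               # offset cancelling it
            if np.linalg.norm(desired) <= reach:
                return desired, True                    # fine stage absorbs it
            return tool_offset, False                   # operator keeps moving

        frame = np.array([10.0, 5.0])   # coarse, human-controlled position
        offset = np.zeros(2)            # fine, device-controlled offset
        offset, on_target = fine_positioning_step(frame, offset,
                                                  np.array([10.5, 4.2]))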

    Data-driven methods for interactive visual content creation and manipulation

    Software tools for creating and manipulating visual content, be they for images, video, or 3D models, are often difficult to use and involve a lot of manual interaction at several stages of the process. Coupled with long processing and acquisition times, content production is rather costly and poses a potential barrier to many applications. Although cameras now allow anyone to easily capture photos and video, tools for manipulating such media demand both artistic talent and technical expertise. At the same time, vast corpora of existing visual content, such as Flickr, YouTube, or Google 3D Warehouse, are now available and easily accessible. This thesis proposes a data-driven approach to tackle the above-mentioned problems in content generation. To this end, statistical models are trained on semantic knowledge harvested from existing visual content corpora. Using these models, we then develop tools that are easy to learn and use, even by novice users, yet still produce high-quality content. These tools have intuitive interfaces and give the user precise and flexible control. Specifically, we apply our models to create tools that simplify the tasks of video manipulation, 3D modeling, and material assignment to 3D objects.
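
    As one hedged illustration of the data-driven idea applied to the last of those tasks, the sketch below suggests a material for a 3D part by letting its nearest neighbours in a descriptor space of already-labelled corpus parts vote. This is a deliberately simple stand-in for the statistical models the thesis trains; the descriptor choice and all names are ours:

        import numpy as np

        def suggest_material(part_desc, corpus_descs, corpus_materials, k=5):
            """Majority vote among the k corpus parts whose shape descriptors
            are closest to the query part's descriptor (all arrays are NumPy)."""
            dists = np.linalg.norm(corpus_descs - part_desc, axis=1)
            nearest = np.argsort(dists)[:k]
            labels, counts = np.unique(corpus_materials[nearest],
                                       return_counts=True)
            return labels[counts.argmax()]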