14 research outputs found

    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper presents a new real-scale 3D system and summarizes first results on multi-modal navigation and interaction interfaces. The work is part of the CALLISTO-SARI collaborative project, which aims at building an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) combined with user gesture. An evaluation of the system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that the natural-language and gesture-based interfaces induced less cybersickness than the device-based interfaces, suggesting that gesture-based interfaces are the more efficient of the two.
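    The abstract reports SSQ results without detailing the scoring. For reference, the SSQ is conventionally scored with the Kennedy et al. (1993) weighting scheme; the sketch below assumes that standard scheme (the paper does not state its exact scoring), and the symptom ratings in the example are hypothetical.

```python
# Hedged sketch: conventional SSQ scoring (Kennedy et al., 1993).
# Subscale assignments and weights follow the published scheme as
# commonly cited; the ratings dict is hypothetical example data.

# Each of the 16 symptoms is rated 0-3 by the participant.
NAUSEA = ["general_discomfort", "increased_salivation", "sweating",
          "nausea", "difficulty_concentrating", "stomach_awareness", "burping"]
OCULOMOTOR = ["general_discomfort", "fatigue", "headache", "eyestrain",
              "difficulty_focusing", "difficulty_concentrating", "blurred_vision"]
DISORIENTATION = ["difficulty_focusing", "nausea", "fullness_of_head",
                  "blurred_vision", "dizzy_eyes_open", "dizzy_eyes_closed",
                  "vertigo"]

def ssq_scores(ratings):
    """Return (nausea, oculomotor, disorientation, total) SSQ scores."""
    raw_n = sum(ratings.get(s, 0) for s in NAUSEA)
    raw_o = sum(ratings.get(s, 0) for s in OCULOMOTOR)
    raw_d = sum(ratings.get(s, 0) for s in DISORIENTATION)
    return (raw_n * 9.54, raw_o * 7.58, raw_d * 13.92,
            (raw_n + raw_o + raw_d) * 3.74)

# Hypothetical post-exposure ratings for one participant.
example = {"nausea": 2, "eyestrain": 1, "vertigo": 1, "sweating": 1}
print(ssq_scores(example))
```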

    Heterogeneous Skeleton for Summarizing Continuously Distributed Demand in a Region

    There has long been interest in the skeleton of a spatial object in GIScience. The reasons for this are many, as it has proven to be an extremely useful summary and explanatory representation of complex objects. While much research has focused on issues of computational complexity and efficiency in extracting skeletal and medial-axis representations, as well as on interpreting the final product, little attention has been paid to fundamental assumptions about the underlying object. This paper discusses the implied assumption of homogeneity associated with methods for deriving a skeleton. Further, it is demonstrated that addressing heterogeneity complicates both the interpretation and the identification of a meaningful skeleton. The heterogeneous skeleton is introduced and formalized, along with a method for its identification. Application results are presented to illustrate the heterogeneous skeleton and to provide a comparative contrast with homogeneity assumptions.
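    The heterogeneous skeleton is the paper's own contribution; as a point of reference, the homogeneous baseline it critiques can be illustrated with an off-the-shelf morphological skeleton of a binary region. The sketch below uses scikit-image; the region and the demand surface are invented for illustration.

```python
# Sketch of the homogeneous baseline: a morphological skeleton of a
# binary region. A heterogeneous variant would additionally weight the
# medial structure by a demand surface; that step is only indicated here.
import numpy as np
from skimage.morphology import skeletonize

# Hypothetical region: a filled rectangle with a notch.
region = np.zeros((100, 160), dtype=bool)
region[20:80, 20:140] = True
region[45:55, 20:60] = False  # the notch makes the skeleton non-trivial

skeleton = skeletonize(region)  # assumes homogeneity inside the region

# A made-up, continuously varying demand surface over the same grid.
yy, xx = np.mgrid[0:100, 0:160]
demand = np.exp(-((xx - 110) ** 2 + (yy - 50) ** 2) / 800.0)

# Demand sampled along the skeleton: a first step toward summarizing
# heterogeneous demand rather than pure shape.
print(demand[skeleton].sum(), skeleton.sum())
```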

    Deformations Preserving Gauß Curvature

    In industrial surface generation, it is important to consider surfaces with minimal area for two main reasons: such surfaces require less material than non-minimal surfaces, and they are cheaper to manufacture. Based on a prototype, a so-called masterpiece, the final product is created using small deformations that adapt the surface to the desired shape. We present a linear deformation technique preserving the total curvature of the masterpiece. In particular, we derive sufficient conditions for these linear deformations to be total-curvature preserving when applied to the masterpiece. Preserving the total curvature of a surface helps minimise the amount of material needed as well as the bending energy. (Proceedings of LHMTS 2013)
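    For a closed triangle mesh, the total Gauss curvature the abstract refers to can be checked discretely via angle defects and the discrete Gauss-Bonnet theorem. The sketch below is a generic illustration of that quantity, not the paper's deformation technique.

```python
# Sketch: discrete total Gauss curvature of a closed triangle mesh via
# angle defects, K_i = 2*pi - sum of incident triangle angles at vertex i.
# By discrete Gauss-Bonnet the total equals 2*pi times the Euler
# characteristic, which a curvature-preserving deformation must keep.
import numpy as np

def total_gauss_curvature(V, F):
    """V: (n,3) vertex positions, F: (m,3) triangle vertex indices."""
    defects = np.full(len(V), 2.0 * np.pi)
    for tri in F:
        p = V[tri]
        for k in range(3):
            a = p[(k + 1) % 3] - p[k]
            b = p[(k + 2) % 3] - p[k]
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            defects[tri[k]] -= np.arccos(np.clip(cos, -1.0, 1.0))
    return defects.sum()

# Regular tetrahedron: Euler characteristic 2, so the total is 4*pi.
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
F = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
print(total_gauss_curvature(V, F), 4 * np.pi)  # both ~12.566
```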

    Towards Expressive and Versatile Visualization-as-a-Service (VaaS)

    The rapid growth of data in scientific visualization has posed significant challenges to the scalability and availability of interactive visualization tools. These challenges can be largely attributed to the limitations of traditional monolithic applications in handling large datasets and accommodating multiple users or devices. To address these issues, the Visualization-as-a-Service (VaaS) architecture has emerged as a promising solution. VaaS leverages cloud-based visualization capabilities to provide on-demand and cost-effective interactive visualization. Existing VaaS designs have been simplistic, focusing on task parallelism with single-user-per-device tasks for predetermined visualizations. This dissertation aims to extend the capabilities of VaaS by exploring data-parallel visualization services with multi-device support and hypothesis-driven explorations. By incorporating stateful information and enabling dynamic computation, VaaS' performance and flexibility for various real-world applications are improved. The dissertation covers the history of monolithic and VaaS architectures, the design and implementation of three new VaaS applications, and a concluding exploration of the future of VaaS. This research contributes to the advancement of interactive scientific visualization, addressing the challenges posed by large datasets and remote collaboration scenarios.
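    To make the VaaS idea concrete, here is a minimal, stateless toy of it: rendering happens server-side on demand and any thin client can fetch the result. This is a sketch using Flask and matplotlib with an invented endpoint; it is deliberately single-node and stateless, i.e. exactly what the dissertation's stateful, data-parallel services go beyond.

```python
# Minimal sketch of the VaaS idea: server-side, on-demand rendering
# reachable from any device. Endpoint name and parameters are invented.
import io
import matplotlib
matplotlib.use("Agg")          # headless rendering on the server
import matplotlib.pyplot as plt
import numpy as np
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/render")          # hypothetical endpoint
def render():
    # Visualization parameters arrive as query args from the client.
    freq = float(request.args.get("freq", 1.0))
    x = np.linspace(0, 2 * np.pi, 500)
    fig, ax = plt.subplots()
    ax.plot(x, np.sin(freq * x))
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run(port=8050)  # e.g. GET /render?freq=3 from a browser
```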

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different to those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode-switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
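    Mode-switching time, the quantity this thesis measures, is often estimated as the extra delay a switch adds between consecutive actions. The sketch below shows one common way to compute that from a timestamped interaction log; the log format and numbers are invented.

```python
# Sketch: estimating mode-switch cost from a timestamped event log.
# Formulation: the gap between consecutive actions that cross a mode
# boundary, minus the gap between consecutive actions within one mode.
# The log format and timings are made up for illustration.
events = [  # (timestamp_seconds, mode, action)
    (0.00, "draw",   "stroke"),
    (0.80, "draw",   "stroke"),
    (2.10, "select", "tap"),     # switched draw -> select
    (2.90, "select", "tap"),
    (4.40, "draw",   "stroke"),  # switched select -> draw
]

same_mode, switch = [], []
for (t0, m0, _), (t1, m1, _) in zip(events, events[1:]):
    (same_mode if m0 == m1 else switch).append(t1 - t0)

def mean(xs):
    return sum(xs) / len(xs)

# Positive difference = time penalty attributable to switching modes.
print(f"switch cost ~ {mean(switch) - mean(same_mode):.2f} s")
```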

    Supporting Focus and Context Awareness in 3D Modeling Using Multi-Layered Displays

    Although advances in computer technology over the past few decades have made it possible to create and render highly realistic 3D models, the process of creating these models has remained largely unchanged over the years. Modern 3D modeling software provides a range of tools to assist users with creating 3D models, but creating models in virtual 3D space is nevertheless still challenging and cumbersome. This thesis therefore aims to investigate whether modelers can be supported more effectively by providing them with alternative combinations of hardware and software tools for their 3D modeling tasks. The first step towards this goal was to better understand the problems modelers face in using conventional 3D modeling software. To achieve this, a pilot study of novice 3D modelers and a more comprehensive study of professional modelers were conducted. These studies identified a range of focus and context awareness problems that modelers face when creating complex 3D models with conventional modeling software. The problems fall into four categories: maintaining position awareness, identifying and selecting objects or components of interest, recognizing the distance between objects or components, and realizing the relative position of objects or components. Based on this categorization, five focus and context awareness techniques were developed for a multi-layer computer display to enable modelers to better maintain their focus and context awareness while performing 3D modeling tasks: object isolation, component segregation, peeling focus, slicing, and peeling focus and context. A user study was then conducted to compare the effectiveness of these techniques with other tools provided by conventional 3D modeling software. The results were used to further improve the five techniques, which were then evaluated in a second study. The two studies demonstrate that some of these techniques support 3D modeling tasks more effectively than existing software tools.
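    The five techniques themselves are specific to the thesis, but their common substrate, deciding which scene objects go to the front (focus) panel and which to the rear (context) panel of a layered display, can be sketched generically. The selection rule below (distance to a chosen object) is an invented stand-in, not one of the thesis's techniques.

```python
# Sketch of the shared idea behind focus-and-context layering:
# partition scene objects between the front (focus) and rear (context)
# panels of a multi-layered display. The distance-based rule is a
# simplified, hypothetical stand-in for the five refined techniques.
import numpy as np

def split_layers(centers, focus_index, radius):
    """centers: (n,3) object centroids. Returns (focus_ids, context_ids)."""
    d = np.linalg.norm(centers - centers[focus_index], axis=1)
    near = d <= radius
    return np.flatnonzero(near), np.flatnonzero(~near)

centers = np.array([[0, 0, 0], [0.5, 0, 0], [5, 1, 0], [6, -2, 1]], float)
focus, context = split_layers(centers, focus_index=0, radius=1.0)
print("front layer:", focus, "rear layer:", context)  # [0 1] vs [2 3]
```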

    Dermal Radiomics: a new approach for computer-aided melanoma screening system

    Skin cancer is the most common form of cancer in North America, and melanoma is the most dangerous type of skin cancer. Melanoma originates from melanocytes in the epidermis and has a high tendency to develop away from the skin surface and to metastasize through the bloodstream. Early diagnosis is known to improve survival rates. Under current practice, the initial examination of a potential melanoma patient is done via naked-eye screening or standard photographic images of the lesion, so the accuracy of diagnosis varies with the expertise of the clinician. Radiomics is a recent cancer diagnostic approach centered on the high-throughput extraction of quantitative and mineable imaging features from medical images to identify tumor phenotypes. Radiomics focuses on optimizing a large number of features through computational approaches to develop a decision support system for improving individualized treatment selection and monitoring. While radiomics has shown great promise for screening and analyzing different forms of cancer such as lung cancer and prostate cancer, to the best of our knowledge it has not previously been adopted for skin cancer, especially melanoma. This work presents a dermal radiomics framework, a novel approach to computer-aided melanoma diagnosis. While most computer-aided melanoma screening systems follow the conventional diagnostic scheme, the proposed work utilizes physiological biomarker information. To extract the physiological biomarkers, a non-linear random forest inverse light-skin interaction model is proposed. A dermal radiomics sequence is then constructed from the extracted physiological biomarkers, and the framework is completed by a diagnostic decision system based on a random forest classification algorithm.
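    The inverse light-skin model and the biomarker features are the paper's own contributions; the final stage it names, a random-forest decision system over extracted features, looks generically like the scikit-learn sketch below. The features and labels here are synthetic placeholders, not the paper's data.

```python
# Sketch of the final classification stage only: a random forest over
# extracted physiological-biomarker features. Features and labels are
# synthetic; the paper's inverse skin-light model is not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical biomarker features, e.g. melanin/hemoglobin descriptors.
X = rng.normal(size=(n, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```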

    Discrete Geometric Methods for Surface Deformation and Visualisation

    Industrial design has a long history. With the introduction of Computer-Aided Engineering, industrial design was revolutionised. Due to the newly found support, the design workflow changed, and with the introduction of virtual prototyping, new challenges arose. These new engineering problems have triggered new basic research questions in computer science. In this dissertation, I present a range of methods which support different components of the virtual design cycle, from modifications of a virtual prototype and optimisation of said prototype, to analysis of simulation results. Starting with a virtual prototype, I support engineers by supplying intuitive discrete normal vectors which can be used to interactively deform the control mesh of a surface. I provide and compare a variety of normal definitions with different strengths and weaknesses; the best choice depends on the specific model and on an engineer's priorities, as some methods have higher accuracy whereas others are faster. I further provide an automatic means of surface optimisation in the form of minimising total curvature. This minimisation reduces surface bending and therefore reduces material expenses. The best results are obtained for analytic surfaces, but the technique can also be applied to real-world examples. Moreover, I provide engineers with a curvature-aware technique to optimise mesh quality, which helps to avoid degenerate triangles that can cause numerical issues. It can be applied to any component of the virtual design cycle: as a direct modification of the virtual prototype (depending on the surface definition), during optimisation, or dynamically during simulation. Finally, I have developed two particle relaxation techniques that each support two components of the virtual design cycle. The first is discretisation: to run computer simulations on a model, it has to be discretised, and particle relaxation takes an initial sampling and improves it towards uniform spacing or curvature-awareness. The second is the analysis of simulation results. Flow visualisation is a powerful tool for analysing flow fields by inserting particles into the flow and tracing their movements. The particle seeding is usually uniform, e.g. for an integral surface one could seed on a square; but integral surfaces undergo strong deformations and can have highly varying curvature. Particle relaxation redistributes the seeds on the surface depending on surface properties like local deformation or curvature.
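    Of the methods listed, particle relaxation is the most self-contained to illustrate. Below is a generic repulsion-based relaxation toward uniform spacing in the unit square, a sketch of the relaxation idea only, not the dissertation's surface-based, curvature-aware schemes.

```python
# Sketch: repulsion-based particle relaxation toward uniform spacing,
# in the unit square for brevity. The dissertation's versions operate
# on surfaces and can weight spacing by curvature; this loop only
# demonstrates the relaxation principle itself.
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((200, 2))              # initial (non-uniform) sampling
h, step = 0.08, 0.2                   # interaction radius, damping

for _ in range(100):
    diff = P[:, None, :] - P[None, :, :]          # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    w = np.maximum(h - dist, 0.0) / dist          # short-range repulsion
    P += step * (w[..., None] * diff).sum(axis=1)
    P = P.clip(0.0, 1.0)                          # stay in the domain

# After relaxation, nearest-neighbour distances are far more uniform.
dist = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)
nn = dist.min(axis=1)
print("min/mean nearest-neighbour distance:", nn.min(), nn.mean())
```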

    Bidirectional Texture Functions: Acquisition, Rendering and Quality Evaluation

    As one of its primary objectives, Computer Graphics aims at simulating the complex reflection behaviour of fabrics. Characteristic surface reflectance of fabrics, such as highlights, anisotropy, or retro-reflection, is difficult to synthesize. This problem can be addressed using Bidirectional Texture Functions (BTFs): 2D textures captured under various light and view directions. But the acquisition of BTFs requires an expensive setup, and the measurement process is very time-consuming. Moreover, the size of BTF data can range from hundreds of megabytes to several gigabytes, as a large number of high-resolution pictures have to be used in the ideal case. Furthermore, three-dimensional textured models rendered with BTFs are subject to various types of distortion during acquisition, synthesis, compression, and processing. An appropriate image quality assessment scheme is a useful tool for evaluating image processing algorithms, especially algorithms designed to leave the image visually unchanged. In this contribution, we present an investigation aimed at locating a robust threshold for downsampling BTF images without losing perceptual quality. To this end, an experimental study on how decreasing the texture resolution influences the perceived quality of the rendered images is presented and discussed. Next, two basic improvements to the use of BTFs for rendering are presented: firstly, the study addresses the cost of BTF acquisition by introducing a flexible low-cost step-motor setup that allows generating a high-quality BTF database captured at user-defined, arbitrary angles. Secondly, the number of acquired textures is adapted to the perceptual quality of the renderings, so that the database does not grow needlessly large and fits better in memory when rendered. Although visual attention is one of the essential attributes of the human visual system (HVS), it is neglected in most existing quality metrics. This thesis therefore proposes an objective quality metric, the Visual Attention Based Image Quality Metric (VABIQM), which extracts visual-attention regions from images and investigates the influence of visual attention on perceived image quality. The novel metric indicates that considering visual saliency offers significant benefits for constructing objective quality metrics that predict visible quality differences in images rendered from compressed and non-compressed BTFs, and it outperforms straightforward existing image quality metrics at detecting perceivable differences.
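    The downsampling threshold in the thesis was located with human observers; a quick objective proxy for the same question can be sketched by comparing a texture with its downsampled-then-restored version via SSIM. The sketch below uses scikit-image on a synthetic stand-in for one BTF slice; it illustrates the kind of check involved and does not replace the perceptual study.

```python
# Sketch: an objective proxy for the downsampling question. Compare a
# texture with its downsampled-then-upsampled version via SSIM; a score
# near 1 suggests the resolution cut may be perceptually safe.
import numpy as np
from skimage.transform import resize
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
# Synthetic stand-in for one BTF slice (one light/view direction).
y, x = np.mgrid[0:256, 0:256] / 256.0
tex = 0.5 + 0.3 * np.sin(12 * x) * np.cos(9 * y) \
          + 0.05 * rng.random((256, 256))

for factor in (2, 4, 8):
    small = resize(tex, (256 // factor, 256 // factor), anti_aliasing=True)
    restored = resize(small, tex.shape)
    score = structural_similarity(tex, restored, data_range=1.0)
    print(f"downsample x{factor}: SSIM = {score:.3f}")
```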