
    A Framework for the Semantics-aware Modelling of Objects

    The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function and role in a complex scenario. Although various attempts have been made in this direction, shape modelling still focuses mainly on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing the geometry without semantic cues may invalidate such functionality or the meaning of objects or their parts. We approach the problem by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. We represent the semantics and the variable geometry of a class of shapes through the parametric template: an annotated 3D model whose geometry can be deformed as long as a set of semantic constraints remains satisfied.

    In this work, we design and develop a framework for the semantics-aware modelling of shapes, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes the semantic constraints into account and provides an optimal solution. The framework is modular, so new tools can be added continuously. While it produces some innovative results in specific areas, the goal of this work is the development of a comprehensive framework combining state-of-the-art techniques and new algorithms, enabling the user to conceptualise her/his knowledge and model geometric shapes.

    The original contributions concern the formalisation of the concept of annotation, with attached properties, and of the relations between significant parts of objects; a new technique for guaranteeing the persistence of annotations after significant changes in the shape's resolution; the exploitation of shape descriptors for the extraction of quantitative information and the assessment of shape variability within a class; and the extension of the popular cage-based deformation techniques to include constraints on the allowed displacement of vertices. In this thesis, we report the design and development of the framework as well as results in two application scenarios, namely product design and archaeological reconstruction.
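
    To make the parametric-template idea concrete, the sketch below shows, in Python, one plausible way to couple annotated parts with semantic constraints so that a deformation is accepted only if every constraint still holds. All class, field and constraint names here are illustrative assumptions, not the framework's actual API, and the simple accept/reject check stands in for the thesis's constrained cage-based deformation.

        # Minimal sketch: an annotated model whose geometry may change only if
        # its semantic constraints stay satisfied. Names are hypothetical.
        from dataclasses import dataclass, field

        @dataclass
        class Annotation:
            """A salient part: vertex indices plus formalised properties."""
            name: str
            vertex_ids: set
            properties: dict = field(default_factory=dict)  # e.g. {"role": "seat"}

        @dataclass
        class ParametricTemplate:
            vertices: list       # [(x, y, z), ...]
            annotations: list    # [Annotation, ...]
            constraints: list    # callables: ParametricTemplate -> bool

            def deform(self, displace):
                """Apply a candidate deformation; refuse it when any semantic
                constraint would be violated (geometry varies, semantics holds)."""
                candidate = ParametricTemplate(
                    [displace(i, v) for i, v in enumerate(self.vertices)],
                    self.annotations, self.constraints)
                if all(check(candidate) for check in self.constraints):
                    self.vertices = candidate.vertices
                    return True
                return False

        # Example constraint: the part annotated as "seat" must stay nearly flat.
        def seat_stays_flat(tmpl, tol=1e-2):
            seat = next(a for a in tmpl.annotations
                        if a.properties.get("role") == "seat")
            zs = [tmpl.vertices[i][2] for i in seat.vertex_ids]
            return max(zs) - min(zs) < tol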

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch their surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available.

    In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented, non-parametric rear-projection surfaces using an infrared-light-based multi-camera, multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content.

    Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. One of the algorithms, a bounded plane sweep, can additionally estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces.

    In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
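
    The relational lookup table can be pictured as a precomputed map from each camera pixel to the projector pixel, physical surface point, and virtual-content texel that correspond to the same spot on the object. The Python sketch below illustrates that idea; the field names and the offline calibration step are hypothetical assumptions, not the dissertation's actual implementation.

        # Minimal sketch of a relational lookup table: each camera pixel is
        # related offline (e.g., via structured-light calibration) to the
        # projector pixel, 3D surface point and content texel it observes.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class LUTEntry:
            projector_px: tuple   # (u, v) projector pixel lighting this spot
            surface_pt: tuple     # (x, y, z) point on the physical object
            content_uv: tuple     # texel in the registered virtual content

        class RelationalLUT:
            def __init__(self):
                self._table = {}  # camera pixel (u, v) -> LUTEntry

            def register(self, camera_px, entry):
                """Filled once, offline, during calibration."""
                self._table[camera_px] = entry

            def localize_touch(self, camera_px):
                """Map a detected fingertip pixel straight to surface and
                content coordinates; no runtime surface parameterization is
                needed, which is why this works on non-parametric shapes."""
                return self._table.get(camera_px)

        # Usage: a blob detector reports contact at camera pixel (412, 317).
        lut = RelationalLUT()
        lut.register((412, 317),
                     LUTEntry((800, 450), (0.12, 0.30, 0.05), (0.48, 0.52)))
        hit = lut.localize_touch((412, 317))
        if hit is not None:
            print("touch hit content texel", hit.content_uv)  # trigger response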

    An ontology-based approach towards coupling task and path planning for the simulation of manipulation tasks

    This work deals with the simulation and validation of complex manipulation tasks under strong geometric constraints in virtual environments. The targeted applications relate to the Industry 4.0 framework; as modern products become more and more integrated and economic competition increases, industrial companies express the need to validate, from the design stage on, not only the static CAD models of their products but also the tasks (e.g., assembly or maintenance) related to their Product Lifecycle Management (PLM). The scientific community has looked at this issue from two points of view:
    - Task planning decomposes a manipulation task to be realized into a sequence of primitive actions (i.e., a task plan).
    - Path planning computes collision-free trajectories, notably for the manipulated objects. It traditionally uses purely geometric data, which leads to classical limitations (possibly high computational processing times, low relevance of the proposed trajectory with respect to the task to be performed, or failure); recent works have shown the interest of using data at higher abstraction levels.

    Joint task and path planning approaches found in the literature usually perform a classical task planning step and then check the feasibility of the path planning requests associated with the primitive actions of this task plan. The link between task and path planning has to be improved, notably because of the lack of feedback from the path planning level to the task planning level:
    - The path planning information used to question the task plan is usually limited to motion feasibility, whereas richer information, such as the relevance or the complexity of the proposed path, would be needed.
    - Path planning queries traditionally use purely geometric data and/or "blind" path planning methods (e.g., RRT), and no task-related information is used at the path planning level.

    Our work focuses on using task-level information at the path planning level. The path planning algorithm considered is RRT; we chose such a probabilistic algorithm because we consider path planning for the simulation and validation of complex tasks under strong geometric constraints. We propose an ontology-based approach that uses task-level information to specify path planning queries for the primitive actions of a task plan. First, we propose an ontology to conceptualize the knowledge about the 3D environment in which the simulated task takes place. This environment is considered a closed part of 3D Cartesian space cluttered with mobile and fixed obstacles (treated as rigid bodies); it is represented by a digital model relying on a multilayer architecture involving semantic, topological and geometric data. The originality of the proposed ontology lies in the fact that it conceptualizes heterogeneous knowledge about both the obstacle and free-space models. Second, we exploit this ontology to automatically generate a path planning query associated with each primitive action of a task plan. Through a reasoning process involving the primitive actions instantiated in the ontology, we are able to infer the start and goal configurations, as well as task-related geometric constraints. Finally, a multi-level path planner is called to generate the corresponding trajectory.

    The contributions of this work have been validated by full simulation of several manipulation tasks under strong geometric constraints. The results obtained demonstrate that using task-related information allows better control over the RRT path planning algorithm used to check motion feasibility for the primitive actions of a task plan, leading to lower computational times and more relevant trajectories for the primitive actions.
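
    As an illustration of how an ontology can turn a primitive action into a path planning query, the Python sketch below resolves an action's semantic slots into start and goal configurations plus task-related constraints before a planner such as RRT is invoked. The slot names, the toy knowledge base and the query fields are assumptions for illustration, not the thesis's actual ontology.

        # Minimal sketch: infer a planning query from a primitive action that
        # has been instantiated in a (here, toy) ontology. Names are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class PrimitiveAction:
            verb: str      # e.g. "insert"
            obj: str       # manipulated rigid body, e.g. "pump_cover"
            source: str    # semantic region it starts in
            target: str    # semantic region it must reach

        @dataclass
        class PathQuery:
            start_config: tuple   # inferred placement of obj in the source region
            goal_config: tuple    # inferred placement of obj in the target region
            constraints: list     # task-related constraints handed to the planner

        def query_from_action(action, kb):
            """Reason over the knowledge base to fill in what the task plan
            leaves implicit: where the object rests, where it must end up,
            and which geometric constraints the verb implies."""
            start = kb["regions"][action.source]["configs"][action.obj]
            goal = kb["regions"][action.target]["configs"][action.obj]
            constraints = kb["verbs"][action.verb]["constraints"]
            return PathQuery(start, goal, constraints)

        # Toy stand-in for the semantic layer of the environment model.
        kb = {
            "regions": {
                "storage_zone": {"configs": {"pump_cover": (0.0, 1.2, 0.4)}},
                "engine_bay": {"configs": {"pump_cover": (0.9, 0.3, 0.6)}},
            },
            "verbs": {"insert": {"constraints": ["approach_along_-z",
                                                 "min_clearance_5mm"]}},
        }
        action = PrimitiveAction("insert", "pump_cover",
                                 "storage_zone", "engine_bay")
        q = query_from_action(action, kb)
        # q.start_config, q.goal_config and q.constraints parameterize the RRT call.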