274,994 research outputs found

    Hybrid shear-warp rendering

    Shear-warp rendering is a fast and efficient method for visualizing a volume of sampled data, based on a factorization of the viewing transformation into a shear and a warp. In shear-warp rendering, the volume is resampled, composited and warped to obtain the final image. Many applications, however, require a mixture of polygonal and volumetric data to be rendered together in a single image. This paper describes a new approach for extending shear-warp rendering to simultaneously handle polygonal objects. A data structure, the zlist-buffer, is presented; it is essentially a multi-layered z-buffer. With the zlist-buffer, an object-based scan conversion of polygons requires only a simple modification of the standard polygon scan-conversion algorithm. This paper shows how the scan conversion can be integrated with shear-warp rendering of run-length-encoded volume data to obtain quality images in real time. The utility and performance of the approach are also discussed using a number of test renderings.
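    The abstract describes the zlist-buffer only in outline; the following is a minimal, hypothetical Python sketch of the multi-layered z-buffer idea (class and method names are illustrative, not taken from the paper): each pixel keeps a depth-sorted list of polygon fragments, so that volume samples produced during shear-warp compositing can later be merged against every fragment covering that pixel.

    ```python
    class ZListBuffer:
        """Hypothetical multi-layered z-buffer: each pixel stores a depth-sorted
        list of (depth, rgba) polygon fragments instead of a single nearest depth."""

        def __init__(self, width, height):
            self.width, self.height = width, height
            self.pixels = [[[] for _ in range(width)] for _ in range(height)]

        def insert(self, x, y, depth, rgba):
            """Insert a scan-converted polygon fragment, keeping the per-pixel list
            sorted front-to-back (a small change to standard scan conversion)."""
            fragments = self.pixels[y][x]
            i = 0
            while i < len(fragments) and fragments[i][0] < depth:
                i += 1
            fragments.insert(i, (depth, rgba))

        def composite_with_volume(self, x, y, volume_samples):
            """Merge volume samples (depth, rgba) with the stored polygon fragments
            front-to-back using standard alpha compositing."""
            merged = sorted(self.pixels[y][x] + volume_samples, key=lambda s: s[0])
            r = g = b = 0.0
            alpha = 0.0
            for _, (sr, sg, sb, sa) in merged:
                weight = (1.0 - alpha) * sa
                r += weight * sr
                g += weight * sg
                b += weight * sb
                alpha += weight
                if alpha >= 0.99:          # early termination once nearly opaque
                    break
            return (r, g, b, alpha)
    ```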

    Implementing imperfect information in fuzzy databases

    Information in real-world applications is often vague, imprecise and uncertain. Ignoring the inherently imperfect nature of real-world data will undoubtedly distort how the real world is represented and may discard substantial information that could be very useful in data-intensive applications. In the database context, several fuzzy database models have been proposed, introducing fuzziness at different levels; common to all of these proposals is support for fuzziness at the attribute level. This paper first proposes a rich set of data types devoted to modelling the different kinds of imperfect information, and then proposes a formal approach to implementing these data types. The proposed approach was implemented within a relational object database model, but it is generic enough to be incorporated into other database models.
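    As a purely illustrative aid (the concrete type system is defined in the paper, not here), a hypothetical Python sketch of attribute-level imperfect values might distinguish crisp, interval, and possibility-distribution data types:

    ```python
    from dataclasses import dataclass
    from typing import Dict, Union

    # Hypothetical attribute-level data types for imperfect information.
    # Names and semantics are illustrative, not the paper's actual type system.

    @dataclass
    class Crisp:
        """An ordinary, perfectly known value."""
        value: float

    @dataclass
    class Interval:
        """An imprecise value known only to lie within [low, high]."""
        low: float
        high: float

    @dataclass
    class Possibilistic:
        """An uncertain value given as a possibility distribution:
        candidate value -> possibility degree in [0, 1]."""
        distribution: Dict[float, float]

    ImperfectValue = Union[Crisp, Interval, Possibilistic]

    def possibility_of(value: ImperfectValue, candidate: float) -> float:
        """Degree to which `candidate` is a possible value of `value`."""
        if isinstance(value, Crisp):
            return 1.0 if value.value == candidate else 0.0
        if isinstance(value, Interval):
            return 1.0 if value.low <= candidate <= value.high else 0.0
        return value.distribution.get(candidate, 0.0)

    # Example: an age recorded only as "about 30".
    age = Possibilistic({29: 0.8, 30: 1.0, 31: 0.8})
    print(possibility_of(age, 30))   # 1.0
    print(possibility_of(age, 40))   # 0.0
    ```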

    When Things Matter: A Data-Centric View of the Internet of Things

    With the recent advances in radio-frequency identification (RFID), low-cost wireless sensor devices, and Web technologies, the Internet of Things (IoT) approach has gained momentum in connecting everyday objects to the Internet and facilitating machine-to-human and machine-to-machine communication with the physical world. While IoT offers the capability to connect and integrate both digital and physical entities, enabling a whole new class of applications and services, several significant challenges need to be addressed before these applications and services can be fully realized. A fundamental challenge centers on managing IoT data, which is typically produced in dynamic and volatile environments and is not only extremely large in scale and volume but also noisy and continuous. This article surveys the main techniques and state-of-the-art research efforts in IoT from a data-centric perspective, including data stream processing, data storage models, complex event processing, and searching in IoT. Open research issues for IoT data management are also discussed.
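    To make the data-stream-processing theme concrete, here is a small, self-contained Python sketch (not from the survey) of tumbling-window aggregation over a noisy, continuous sensor stream, the kind of primitive the surveyed stream-processing systems build on:

    ```python
    from collections import deque
    from statistics import mean
    from typing import Iterable, Iterator, Tuple

    # Hypothetical sketch: (timestamp_seconds, reading) pairs arrive
    # continuously and in order; we aggregate them into fixed-size windows.

    def tumbling_window_avg(
        stream: Iterable[Tuple[float, float]],
        window_seconds: float,
    ) -> Iterator[Tuple[float, float]]:
        """Yield (window_start, average_reading) for each completed window."""
        buffer = deque()
        window_start = None
        for ts, value in stream:
            if window_start is None:
                window_start = ts
            while ts - window_start >= window_seconds:
                if buffer:
                    yield window_start, mean(buffer)
                    buffer.clear()
                window_start += window_seconds
            buffer.append(value)
        if buffer:                      # flush the final partial window
            yield window_start, mean(buffer)

    # Example: 1-second readings aggregated into 5-second windows.
    readings = [(t, 20.0 + 0.1 * t) for t in range(12)]
    for start, avg in tumbling_window_avg(readings, window_seconds=5):
        print(f"window starting at t={start}: avg={avg:.2f}")
    ```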

    The supervised IBP: neighbourhood preserving infinite latent feature models

    We propose a probabilistic model to infer supervised latent variables in the Hamming space from observed data. Our model allows simultaneous inference of the number of binary latent variables and their values. The latent variables preserve the neighbourhood structure of the data, in the sense that objects in the same semantic concept have similar latent values and objects in different concepts have dissimilar latent values. We formulate the supervised infinite latent variable problem based on an intuitive principle of pulling objects together if they are of the same type and pushing them apart if they are not, and we combine this principle with a flexible Indian Buffet Process prior on the latent variables. We show that the inferred supervised latent variables can be used directly to perform nearest neighbour search for the purpose of retrieval. We introduce a new application of dynamically extending hash codes, and show how to effectively couple the structure of the hash codes with the continuously growing structure of the neighbourhood-preserving infinite latent feature space.
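    A hedged illustration of the retrieval step only (the IBP inference itself is beyond a few lines): once each object has a binary latent code, nearest-neighbour retrieval reduces to ranking by Hamming distance, for example:

    ```python
    # Minimal sketch: nearest-neighbour retrieval over binary latent codes
    # (hash codes) via Hamming distance. The codes below are made up; in the
    # paper they would come from the supervised IBP inference.

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two codes packed into ints."""
        return bin(a ^ b).count("1")

    def nearest_neighbours(query: int, database: dict, k: int = 3):
        """Return the k database items whose codes are closest to `query`."""
        return sorted(database, key=lambda name: hamming(query, database[name]))[:k]

    # Toy database: object name -> 8-bit latent code.
    codes = {
        "cat_1": 0b10110010,
        "cat_2": 0b10110110,
        "car_1": 0b01001101,
        "car_2": 0b01001001,
    }
    query_code = 0b10110011        # an unseen "cat"-like object
    print(nearest_neighbours(query_code, codes, k=2))   # ['cat_1', 'cat_2']
    ```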

    Extending ACL2 with SMT Solvers

    We present our extension of ACL2 with Satisfiability Modulo Theories (SMT) solvers using ACL2's trusted clause processor mechanism. We are particularly interested in the verification of physical systems, including Analog and Mixed-Signal (AMS) designs. ACL2 offers strong induction abilities for reasoning about sequences, and SMT solvers complement deduction methods like ACL2 with fast nonlinear arithmetic solving procedures. While SAT solvers have been integrated into ACL2 in previous work, SMT methods raise new issues because of their support for a broader range of domains, including real numbers and uninterpreted functions. This paper presents Smtlink, our clause processor for integrating SMT solvers into ACL2. We describe key design and implementation issues and report on our experience with its use. Comment: In Proceedings ACL2 2015, arXiv:1509.0552
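    For a flavour of the nonlinear arithmetic goals such an SMT backend can discharge, here is a small Python example using the Z3 bindings (an assumption of this sketch; Smtlink's actual interface is ACL2 clause processing, not this API): proving that x^2 - 2x + 2 > 0 for all real x by checking that its negation is unsatisfiable.

    ```python
    # Sketch only: uses the z3-solver Python bindings (pip install z3-solver),
    # not Smtlink's ACL2 interface, to show the kind of nonlinear real-arithmetic
    # obligation an SMT solver settles quickly.
    from z3 import Real, Solver, Not, unsat

    x = Real("x")
    claim = x * x - 2 * x + 2 > 0          # holds for all real x: (x-1)^2 + 1 > 0

    s = Solver()
    s.add(Not(claim))                       # look for a counterexample
    result = s.check()
    print("claim proved" if result == unsat else f"counterexample: {s.model()}")
    ```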

    Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple-model fitting approach in which each object can move independently of the background and still be effectively tracked, with its shape fused over time using only the information from pixels associated with that object's label. Previous attempts to deal with dynamic scenes have typically treated moving regions as outliers and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level, which has the potential to allow interaction with its working environment, even in the case of dynamic scenes. Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
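    To illustrate the core idea of fusing only the pixels that carry a given object's label (a simplified, hypothetical sketch, not the paper's actual surfel fusion or pose tracking), the following NumPy snippet back-projects labelled depth pixels with the camera intrinsics and accumulates a separate point set per object:

    ```python
    import numpy as np

    # Hypothetical sketch of per-object fusion: each segmented object keeps its
    # own 3D model (here just an accumulated point cloud), and a frame only
    # contributes the pixels that carry that object's label.

    def backproject(depth, mask, fx, fy, cx, cy):
        """Lift the masked depth pixels into 3D camera coordinates."""
        v, u = np.nonzero(mask & (depth > 0))
        z = depth[v, u]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=1)

    def fuse_frame(models, depth, labels, fx, fy, cx, cy):
        """Append each object's labelled points to that object's own model."""
        for obj_id in np.unique(labels):
            if obj_id == 0:                      # 0 = background, handled separately
                continue
            points = backproject(depth, labels == obj_id, fx, fy, cx, cy)
            models.setdefault(int(obj_id), []).append(points)

    # Toy frame: a 4x4 depth image with one labelled object (id 1).
    depth = np.full((4, 4), 2.0)
    labels = np.zeros((4, 4), dtype=int)
    labels[1:3, 1:3] = 1
    models = {}
    fuse_frame(models, depth, labels, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
    print(models[1][0].shape)        # (4, 3): four labelled pixels lifted to 3D
    ```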