
    An in-depth study of graph partitioning measures for perceptual organization


    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
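
    The de facto standard formulation referred to above is maximum a posteriori (MAP) estimation over a factor graph. The LaTeX sketch below states it in the usual notation; the symbols are conventional choices, not taken from this listing.

```latex
% X stacks robot poses and landmark positions; each measurement z_k depends
% on a subset X_k of the variables through a (generally nonlinear) model h_k
% with Gaussian noise of information matrix Omega_k; priors are absorbed
% into the factors.
\begin{equation}
  X^{\star}
    = \arg\max_{X} \, p(X \mid Z)
    = \arg\min_{X} \sum_{k} \| h_k(X_k) - z_k \|^{2}_{\Omega_k}
\end{equation}
```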

    Part Description and Segmentation Using Contour, Surface and Volumetric Primitives

    The problem of part definition, description, and decomposition is central to shape recognition systems. The ultimate goal of segmenting range images into meaningful parts and objects has proved very difficult to realize, mainly because the segmentation problem has been isolated from the issue of representation. We propose a paradigm for part description and segmentation by integration of contour, surface, and volumetric primitives. Unlike previous approaches, we have used geometric properties derived from both boundary-based (surface contours and occluding contours) and primitive-based (quadric patches and superquadric models) representations to define and recover part-whole relationships, without a priori knowledge about the objects or object domain. The object shape is described at three levels of complexity, each contributing to the overall shape. Our approach can be summarized as answering the following question: given that we have three different modules for extracting volume, surface, and boundary properties, how should they be invoked, evaluated, and integrated? Volume and boundary fitting, and surface description, are performed in parallel to combine the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The process involves feedback between the segmentor (the Control Module) and the individual shape-description modules. The control module evaluates the intermediate descriptions and formulates hypotheses about parts. Hypotheses are further tested by the segmentor and the descriptors. The descriptions thus obtained are independent of position, orientation, scale, domain, and domain properties, and are based purely on geometric considerations. They are extremely useful for high-level, domain-dependent symbolic reasoning processes, which need not deal with a tremendous amount of data, but only with a rich description of the data in terms of primitives recovered at various levels of complexity.
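
    The feedback loop between the control module and the shape-description modules lends itself to a hypothesize-and-test skeleton. The Python sketch below is purely illustrative: the data structures, function names, and the overlap-based scoring rule are assumptions made here for exposition, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PartHypothesis:
    primitive: str                              # e.g. "superquadric" (illustrative label)
    support: set = field(default_factory=set)   # point indices the primitive explains
    score: float = 0.0                          # agreement across cue types

def control_loop(volume_cues, surface_cues, boundary_cues, accept=0.5):
    """Form part hypotheses from volumetric primitives and keep those that are
    also supported by surface patches and boundary contours (hypothetical rule)."""
    hypotheses = [PartHypothesis(name, set(pts)) for name, pts in volume_cues]
    accepted = []
    for h in hypotheses:
        # A hypothesis gains support when surface and boundary descriptions
        # explain roughly the same region as the volumetric primitive.
        surf = max((len(h.support & set(s)) / max(len(h.support), 1)
                    for s in surface_cues), default=0.0)
        bnd = max((len(h.support & set(b)) / max(len(h.support), 1)
                   for b in boundary_cues), default=0.0)
        h.score = 0.5 * (surf + bnd)
        if h.score >= accept:
            accepted.append(h)
    return accepted

# Example: one volumetric hypothesis checked against one surface and one boundary cue.
parts = control_loop([("superquadric", {1, 2, 3, 4})], [{2, 3, 4}], [{1, 2}])
```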

    Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention

    Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about the temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is often too small to cover the exponential variability of such configurations. We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories. Detections and motion estimates provide contradictory information in the case of false-alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, which corrects motion leakage between correctly detected objects while remaining robust to false alarms and spatially inaccurate detections. We first present a motion segmentation framework that exploits the long-range motion of point trajectories and the large spatial support of image regions. We show that the resulting video segments adapt to targets under partial occlusions and deformations. Second, we augment motion-based representations with object detection to deal with motion leakage. We demonstrate how to combine dense optical-flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes. Third, we study human motion and pose estimation. We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve the tracking of body joints under wide deformations. We use the motion segmentability of body parts to re-rank a set of body-joint candidate trajectories and jointly infer multi-frame body pose and video segmentation. We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
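
    The idea of combining trajectory affinities with repulsions from confident detections can be pictured as grouping on a signed affinity graph. The sketch below is a simplified spectral-grouping stand-in under assumed inputs; it is not the graph-steering algorithm from the thesis, and all names and the clustering rule are placeholders.

```python
import numpy as np

def co_cluster(motion_affinity, detect_repulsion, n_clusters=2, seed=0):
    """motion_affinity, detect_repulsion: (n, n) symmetric non-negative arrays
    over trajectory/detection nodes (assumed precomputed). Returns node labels."""
    W = motion_affinity - detect_repulsion        # signed affinity graph
    W = 0.5 * (W + W.T)                           # enforce symmetry
    d = np.abs(W).sum(axis=1) + 1e-9
    L = np.diag(d) - W                            # signed graph Laplacian
    # Embed nodes with the eigenvectors of the smallest eigenvalues, then
    # group them with a tiny k-means loop (a common spectral-clustering recipe).
    _, vecs = np.linalg.eigh(L)
    embedding = vecs[:, :n_clusters]
    rng = np.random.default_rng(seed)
    centers = embedding[rng.choice(len(embedding), n_clusters, replace=False)]
    for _ in range(20):
        labels = np.argmin(((embedding[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([embedding[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(n_clusters)])
    return labels
```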

    The effects of belongingness on the Simultaneous Lightness Contrast: A virtual reality study

    Simultaneous Lightness Contrast (SLC) is the phenomenon whereby a grey patch on a dark background appears lighter than an identical patch on a light background. Interestingly, the lightness difference between these patches is substantially augmented when the two backgrounds are patterned, forming the articulated-SLC display. There are two main interpretations of these phenomena: the mid-level interpretation maintains that the visual system groups the luminance within a set of contiguous frameworks, whilst the high-level one claims that the visual system splits the luminance into separate overlapping layers corresponding to separate physical contributions. This research aimed to test these two interpretations by systematically manipulating the viewing distance and the horizontal distance between the backgrounds of both the articulated and plain SLC displays. An immersive 3D virtual reality system was employed to reproduce identical alignment and distances, as well as to isolate participants from interfering luminance. Results showed that reducing the viewing distance increased contrast in both the plain- and articulated-SLC displays, and that increasing the horizontal distance between the backgrounds decreased contrast in the articulated condition but increased it in the plain condition. These results suggest that a comprehensive lightness theory should combine the two interpretations.

    A Computational Model of the Short-Cut Rule for 2D Shape Decomposition

    We propose a new 2D shape decomposition method based on the short-cut rule. The short-cut rule originates from cognition research and states that the human visual system prefers to partition an object into parts using the shortest possible cuts. We propose and implement a computational model for the short-cut rule and apply it to the problem of shape decomposition. The proposed model generates a set of cut hypotheses passing through the points on the silhouette that represent negative minima of curvature. We then show that most part-cut hypotheses can be eliminated by analysing the local properties of each. Finally, the remaining hypotheses are evaluated in ascending order of length, which guarantees that, of any pair of conflicting cuts, only the shorter will be accepted. We demonstrate that, compared with state-of-the-art shape decomposition methods, the proposed approach achieves decomposition results that better correspond to human intuition as revealed in psychological experiments.
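
    The ascending-length evaluation with conflict rejection reduces to a short greedy pass over the candidate cuts. The sketch below assumes the curvature analysis and local pruning have already produced the candidate index pairs, and it uses a deliberately simplified conflict test (shared endpoints); the paper's actual criteria may differ.

```python
import numpy as np

def select_cuts(points, candidates):
    """points: (n, 2) silhouette coordinates; candidates: list of (i, j) index
    pairs joining negative-minima-of-curvature points (assumed precomputed)."""
    lengths = [np.linalg.norm(points[i] - points[j]) for i, j in candidates]
    accepted, used = [], set()
    for k in np.argsort(lengths):                # shortest cuts first
        i, j = candidates[k]
        if i in used or j in used:               # simplified conflict test:
            continue                             # a longer conflicting cut loses
        accepted.append((i, j))
        used.update((i, j))
    return accepted
```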

    EU accession and Poland's external trade policy

    No description supplied.

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with substantial functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than only the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.