
    Multiresolution community detection for megascale networks by information-based replica correlations

    We use a Potts model community detection algorithm to accurately and quantitatively evaluate the hierarchical or multiresolution structure of a graph. Our multiresolution algorithm calculates correlations among multiple copies ("replicas") of the same graph over a range of resolutions. Significant multiresolution structures are identified by strongly correlated replicas. The average normalized mutual information, the variation of information, and other measures in principle give a quantitative estimate of the "best" resolutions and indicate the relative strength of the structures in the graph. Because the method is based on information comparisons, it can in principle be used with any community detection model that can examine multiple resolutions. Our approach may be extended to other optimization problems. As a local measure, our Potts model avoids the "resolution limit" that affects other popular models. With this model, our community detection algorithm has an accuracy that ranks among the best of currently available methods. Using it, we can examine graphs of over 40 million nodes and more than one billion edges. We further report that the multiresolution variant of our algorithm can solve systems of at least 200,000 nodes and 10 million edges on a single processor with exceptionally high accuracy. For typical cases, we find super-linear scaling, O(L^{1.3}) for community detection and O(L^{1.3} log N) for the multiresolution algorithm, where L is the number of edges and N is the number of nodes in the system. Comment: 19 pages, 14 figures, published version with minor changes
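
    The abstract's "best resolution" criterion rests on information-theoretic comparisons between replicas, so a concrete reference point may help: below is a minimal numpy sketch of the normalized mutual information between two replica partitions, with illustrative toy labels. It is not the authors' multiresolution implementation, which also uses the variation of information and other measures.

```python
import numpy as np

def normalized_mutual_information(a, b):
    """NMI between two community assignments a, b (one integer label per node)."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    # Joint distribution over (community in a, community in b)
    labels_a, inv_a = np.unique(a, return_inverse=True)
    labels_b, inv_b = np.unique(b, return_inverse=True)
    joint = np.zeros((len(labels_a), len(labels_b)))
    np.add.at(joint, (inv_a, inv_b), 1.0)
    joint /= n
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    # Mutual information I(A;B) and entropies H(A), H(B)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (pa[:, None] * pb[None, :])[nz]))
    ha, hb = -np.sum(pa * np.log(pa)), -np.sum(pb * np.log(pb))
    return 2.0 * mi / (ha + hb) if (ha + hb) > 0 else 1.0

# Two "replicas" of a 6-node partition; identical structure gives NMI = 1
replica_1 = [0, 0, 0, 1, 1, 1]
replica_2 = [1, 1, 1, 0, 0, 0]
print(normalized_mutual_information(replica_1, replica_2))  # -> 1.0
```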

    Range Image Segmentation for 3-D Object Recognition

    Three-dimensional scene analysis in an unconstrained and uncontrolled environment is the ultimate goal of computer vision. Explicit depth information about the scene is of tremendous help in the segmentation and recognition of objects. Range image interpretation with a view to obtaining low-level features to guide mid-level and high-level segmentation and recognition processes is described. No assumptions about the scene are made, and the algorithms are applicable to any general single-viewpoint range image. Low-level features such as step edges and surface characteristics are extracted from the images, and segmentation is performed based on individual features as well as combinations of features. A high-level recognition process based on superquadric fitting is described to demonstrate the usefulness of initial segmentation based on edges. A classification algorithm based on surface curvatures is used to obtain an initial segmentation of the scene. Objects segmented using edge information are then classified using surface curvatures. Various applications of surface curvatures in mid- and high-level recognition processes are discussed, including surface reconstruction, segmentation into convex patches, and detection of smooth edges. The algorithms are run on real range images and the results are discussed in detail
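
    As a rough illustration of the curvature-based classification step, the following sketch labels range-image pixels by the signs of their mean and Gaussian curvature (the classic HK-sign surface typing). The threshold, label names, and toy arrays are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def hk_classify(H, K, eps=1e-3):
    """Label each pixel by the signs of mean (H) and Gaussian (K) curvature.

    Classic HK-sign surface typing often used to seed range-image
    segmentation; the eps threshold and labels here are illustrative.
    """
    h = np.where(H < -eps, -1, np.where(H > eps, 1, 0))
    k = np.where(K < -eps, -1, np.where(K > eps, 1, 0))
    # (sign of H, sign of K) -> surface type
    types = {(-1, 1): "peak", (-1, 0): "ridge",  (-1, -1): "saddle ridge",
             ( 0, 1): "none", ( 0, 0): "flat",   ( 0, -1): "minimal",
             ( 1, 1): "pit",  ( 1, 0): "valley", ( 1, -1): "saddle valley"}
    out = np.empty(H.shape, dtype=object)
    for (hs, ks), name in types.items():
        out[(h == hs) & (k == ks)] = name
    return out

# Toy 2x2 curvature maps
H = np.array([[0.0, -0.5], [0.5, 0.0]])
K = np.array([[0.0,  0.2], [0.2, -0.3]])
print(hk_classify(H, K))  # flat / peak / pit / minimal
```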

    Filtering of Artifacts and Pavement Segmentation from Mobile LiDAR Data

    This paper presents an automatic method for filtering and segmenting 3D point clouds acquired from mobile LiDAR systems. Our approach exploits 3D information by using range images and several morphological operators. Firstly, a detection of artifacts is carried out in order to filter the point clouds. The artifact detection is based on a top-hat of a hole-filling algorithm. Secondly, ground segmentation extracts the contour between pavements and roads. The method uses a quasi-flat zone algorithm and a region adjacency graph representation. Edges are evaluated with the local height difference along the corresponding boundary. Finally, edges with a value compatible with the pavement/road difference (about 14 cm) are selected. Preliminary results demonstrate the ability of this approach to automatically filter artifacts and segment pavements from 3D data
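
    To make the final selection step concrete, here is a small illustrative sketch that picks region-adjacency edges whose height jump matches a curb of about 14 cm. The region heights, adjacency list, and tolerance are hypothetical inputs standing in for the paper's quasi-flat-zone segmentation and region adjacency graph.

```python
def select_curb_edges(region_height, adjacency, target=0.14, tol=0.05):
    """Pick region-adjacency edges whose height jump matches a curb (~14 cm).

    region_height: dict region_id -> mean elevation in metres, assumed to come
    from a prior quasi-flat-zone segmentation; adjacency: iterable of
    (r1, r2) region pairs. Inputs and tolerance are illustrative assumptions.
    """
    curb_edges = []
    for r1, r2 in adjacency:
        dh = abs(region_height[r1] - region_height[r2])
        if abs(dh - target) <= tol:
            curb_edges.append((r1, r2))
    return curb_edges

# Toy example: region 0 is road, region 1 a pavement 15 cm higher, region 2 a wall
heights = {0: 0.00, 1: 0.15, 2: 2.50}
print(select_curb_edges(heights, [(0, 1), (0, 2), (1, 2)]))  # -> [(0, 1)]
```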

    Road edge and lane boundary detection using laser and vision

    This paper presents a methodology for extracting road edge and lane information for smart and intelligent navigation of vehicles. The range information provided by a fast laser range-measuring device is processed by an extended Kalman filter to extract the road edge or curb information. The resultant road edge information is used to aid the extraction of the lane boundary from a CCD camera image. A Hough Transform (HT) is used to extract the candidate lane boundary edges, and the most probable lane boundary is determined using an Active Line Model based on minimizing an appropriate energy function. Experimental results are presented to demonstrate the effectiveness of the combined laser and vision strategy for road edge and lane boundary detection
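
    For readers unfamiliar with the Hough Transform step, the sketch below shows a bare-bones accumulator that returns candidate (rho, theta) line parameters from a binary edge mask. The bin counts and peak-picking rule are illustrative assumptions, unrelated to the paper's configuration or its Active Line Model refinement.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, n_rho=200, top_k=5):
    """Return the top_k (rho, theta) line candidates from a binary edge mask."""
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*edge_mask.shape)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)        # rho for every theta
        r_idx = np.clip(np.digitize(r, rhos) - 1, 0, n_rho - 1)
        acc[r_idx, np.arange(n_theta)] += 1                # vote
    peaks = np.argsort(acc, axis=None)[::-1][:top_k]       # strongest cells
    r_i, t_i = np.unravel_index(peaks, acc.shape)
    return list(zip(rhos[r_i], thetas[t_i]))

# Toy edge mask with a single vertical line at x = 10
mask = np.zeros((50, 50), dtype=bool)
mask[:, 10] = True
print(hough_lines(mask, top_k=1))
```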

    A bin picking system based on depth from defocus

    It is generally accepted that to develop versatile bin-picking systems capable of grasping and manipulation operations, accurate 3-D information is required. To accomplish this goal, we have developed a fast and precise range sensor based on active depth from defocus (DFD). This sensor is used in conjunction with a three-component vision system that is able to recognize and evaluate the attitude of 3-D objects. The first component performs scene segmentation using an edge-based approach. Since edges are used to detect the object boundaries, a key issue consists of improving the quality of edge detection. The second component attempts to recognize the object placed on top of the object pile using a model-driven approach in which the segmented surfaces are compared with those stored in the model database. Finally, the attitude of the recognized object is evaluated using an eigenimage approach augmented with range data analysis. The full bin-picking system is outlined, and a number of experimental results are examined
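
    The eigenimage component can be illustrated with a minimal PCA-based matcher: build an eigenspace from training views and classify a segmented query by nearest neighbour in that space. The function names, and the omission of the range-data augmentation, are assumptions of this sketch rather than the authors' system.

```python
import numpy as np

def build_eigenspace(train_images, k=8):
    """PCA ("eigenimage") basis from flattened training views of known objects."""
    X = np.stack([im.ravel().astype(float) for im in train_images])
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the eigenimages
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def recognise(query, mean, basis, train_images, labels):
    """Nearest neighbour in eigenspace; returns the label of the best match."""
    project = lambda im: basis @ (im.ravel().astype(float) - mean)
    q = project(query)
    dists = [np.linalg.norm(project(t) - q) for t in train_images]
    return labels[int(np.argmin(dists))]
```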

    PCA-based line detection from range data for mapping and localization-aiding of UAVs

    This paper presents an original technique for robust detection of line features from range data, which is also the core element of an algorithm conceived for mapping 2D environments. A new approach is also discussed to improve the accuracy of the position and attitude estimates of the localization by feeding back angular information extracted from the detected edges in the updating map. The innovative aspect of the line detection algorithm is the proposed hierarchical clustering method for segmentation, while line fitting is carried out using Principal Component Analysis rather than the Least Squares linear regression adopted by traditional techniques. Numerical simulations are purposely conceived to compare these two approaches to line fitting. Results demonstrate the applicability of the proposed technique, as it provides performance comparable to the least squares method in terms of computational load and accuracy. Also, the performance of the overall line detection architecture, as well as of the solutions proposed for line-based mapping and localization-aiding, is evaluated using real range data acquired in indoor environments with a UTM-30LX-EW 2D LIDAR. This paper lies in the framework of autonomous navigation of unmanned vehicles moving in complex 2D areas, e.g. areas that are unexplored, full of obstacles, or GPS-challenged or denied
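
    The contrast between PCA-based and least-squares line fitting is easy to show in code: the sketch below fits a total-least-squares line through 2D range points via the principal eigenvector of their covariance, which stays well conditioned for near-vertical segments where an ordinary regression of y on x degrades. The toy data and function name are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_line_pca(points):
    """Total-least-squares line fit to 2D range points via PCA.

    Returns (centroid, direction): the fitted line passes through the centroid
    along the eigenvector of the covariance with the largest eigenvalue.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    direction = eigvecs[:, -1]               # largest-variance direction
    return centroid, direction

# Noisy points along a near-vertical wall segment at x = 2
pts = np.column_stack([2 + 0.01 * np.random.randn(50), np.linspace(0, 3, 50)])
c, d = fit_line_pca(pts)
print(c, d)   # direction close to (0, 1)
```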

    BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection

    Graph anomaly detection (GAD) has gained increasing attention in recent years due to its critical application in a wide range of domains, such as social networks, financial risk management, and traffic analysis. Existing GAD methods can be categorized into node and edge anomaly detection models based on the type of graph objects being detected. However, these methods typically treat node and edge anomalies as separate tasks, overlooking their associations and frequent co-occurrences in real-world graphs. As a result, they fail to leverage the complementary information provided by node and edge anomalies for mutual detection. Additionally, state-of-the-art GAD methods, such as CoLA and SL-GAD, heavily rely on negative pair sampling in contrastive learning, which incurs high computational costs, hindering their scalability to large graphs. To address these limitations, we propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE). We extract a subgraph (graph view) centered on each target node as node context and transform it into a dual hypergraph (hypergraph view) as edge context. These views are encoded using graph and hypergraph neural networks to capture the representations of nodes, edges, and their associated contexts. By swapping the context embeddings between nodes and edges and measuring the agreement in the embedding space, we enable the mutual detection of node and edge anomalies. Furthermore, we adopt a bootstrapped training strategy that eliminates the need for negative sampling, enabling BOURNE to handle large graphs efficiently. Extensive experiments conducted on six benchmark datasets demonstrate the superior effectiveness and efficiency of BOURNE in detecting both node and edge anomalies
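
    As a hedged illustration of the negative-free, swapped-context objective, the following numpy sketch scores agreement between node embeddings and edge-derived context embeddings (and vice versa) with a BYOL-style cosine loss. It omits the encoders, target networks, and hypergraph construction, and the array names are assumptions rather than BOURNE's actual interface.

```python
import numpy as np

def swapped_agreement_loss(node_emb, node_ctx, edge_emb, edge_ctx):
    """Negative-sampling-free agreement objective between swapped views.

    node_emb / edge_emb: online embeddings of nodes and edges; node_ctx /
    edge_ctx: context embeddings produced by the other view (edge context for
    nodes, node context for edges). All arrays are (n, d) and illustrative.
    Higher cosine agreement means lower loss; the per-item loss could serve
    as an anomaly score at inference time.
    """
    def cos(a, b):
        a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-12)
        b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-12)
        return np.sum(a * b, axis=1)
    loss_nodes = 2.0 - 2.0 * cos(node_emb, edge_ctx)   # BYOL-style objective
    loss_edges = 2.0 - 2.0 * cos(edge_emb, node_ctx)
    return loss_nodes.mean() + loss_edges.mean()
```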

    Incremental Dead State Detection in Logarithmic Time

    Identifying live and dead states in an abstract transition system is a recurring problem in formal verification; for example, it arises in our recent work on efficiently deciding regex constraints in SMT. However, state-of-the-art graph algorithms for maintaining reachability information incrementally (that is, as states are visited and before the entire state space is explored) assume that new edges can be added from any state at any time, whereas in many applications, outgoing edges are added from each state as it is explored. To formalize the latter situation, we propose guided incremental digraphs (GIDs), incremental graphs which support labeling closed states (states which will not receive further outgoing edges). Our main result is that dead state detection in GIDs is solvable in O(log m) amortized time per edge for m edges, improving upon the O(√m) per edge bound due to Bender, Fineman, Gilbert, and Tarjan (BFGT) for general incremental directed graphs. We introduce two algorithms for GIDs: one establishing the logarithmic time bound, and a second algorithm exploring a lazy heuristics-based approach. To enable an apples-to-apples experimental comparison, we implemented both algorithms, two simpler baselines, and the state-of-the-art BFGT baseline using a common directed graph interface in Rust. Our evaluation shows 110-530x speedups over BFGT for the largest input graphs over a range of graph classes, random graphs, and graphs arising from regex benchmarks. Comment: 22 pages + references
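
    To fix intuition about the GID operations, here is a deliberately naive Python baseline exposing add-edge, close, mark-live, and dead-state queries. It recomputes reachability per query, treats a state as dead when every reachable state is closed and none is marked live (one plausible reading of the abstract), and bears no relation to the paper's O(log m) amortized data structure or its Rust implementations.

```python
from collections import defaultdict

class NaiveGuidedDigraph:
    """Naive baseline for guided incremental digraphs (GIDs); illustrative only."""

    def __init__(self):
        self.succ = defaultdict(set)
        self.closed = set()
        self.live = set()      # explicitly marked live (e.g. accepting) states

    def add_edge(self, u, v):
        # Closed states receive no further outgoing edges, per the GID setting
        assert u not in self.closed
        self.succ[u].add(v)

    def close(self, u):
        self.closed.add(u)

    def mark_live(self, u):
        self.live.add(u)

    def is_dead(self, u):
        # DFS from u: dead iff every reachable state is closed and none is live
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            if x in self.live or x not in self.closed:
                return False   # can (still) reach a live or open state
            for y in self.succ[x] - seen:
                seen.add(y)
                stack.append(y)
        return True
```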

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot's sensors and predicts where the mobile robot can find buildings and potentially driveable ground
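
    A toy stand-in for using the ground-level semantics to segment the aerial image: learn a per-class colour mean from cells the robot map labels confidently as building or driveable ground, then assign every aerial pixel to the nearest class mean. The inputs, class encoding, and nearest-mean rule are assumptions for illustration only, not the paper's local/global segmentation.

```python
import numpy as np

def extend_semantics(aerial_rgb, seed_mask):
    """Propagate ground-level semantics across a whole aerial image.

    aerial_rgb: (H, W, 3) float image; seed_mask: (H, W) int array with 1 for
    cells the robot map labels as building, 2 for driveable ground, and 0 for
    unknown (outside sensor range). A per-class mean colour is learned from
    the seeds and every pixel is assigned to the nearest class mean.
    """
    classes = [1, 2]
    means = np.stack([aerial_rgb[seed_mask == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(aerial_rgb[:, :, None, :] - means[None, None], axis=-1)
    return np.array(classes)[dists.argmin(axis=-1)]      # (H, W) label map
```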