208 research outputs found

    Fibonacci Binning

    This note argues that when dot-plotting distributions typically found in papers about web and social networks (degree distributions, component-size distributions, etc.), and more generally distributions with high variability in their tail, an exponentially binned version should always be plotted as well, and suggests Fibonacci binning as a visually appealing, easy-to-use and practical choice.
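    A minimal sketch of what such binning might look like in practice, assuming NumPy and a Zipf-like sample as stand-in data for a degree sequence; the geometric bin centres and the frequency normalisation are illustrative choices, not the note's reference implementation:

```python
import numpy as np

def fibonacci_bins(max_value):
    """Bin boundaries at consecutive Fibonacci numbers covering max_value
    (illustrative reading of the note's proposal)."""
    bounds = [1, 2]
    while bounds[-1] <= max_value:
        bounds.append(bounds[-1] + bounds[-2])
    return np.array(bounds)

def binned_distribution(samples):
    """Average empirical frequency per Fibonacci bin, for dot-plotting
    alongside the raw distribution."""
    samples = np.asarray(samples)
    bounds = fibonacci_bins(samples.max())
    counts, _ = np.histogram(samples, bins=bounds)
    widths = np.diff(bounds)                            # bin widths grow roughly exponentially
    centers = np.sqrt(bounds[:-1] * (bounds[1:] - 1))   # geometric mid-point of each bin (assumed choice)
    density = counts / (widths * len(samples))          # average frequency per integer value in the bin
    return centers, density

# Example: a heavy-tailed sample standing in for a degree sequence
degrees = np.random.zipf(2.0, 100_000)
x, y = binned_distribution(degrees)
```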

    Analysis of binning of normals for spherical harmonic cross-correlation

    Spherical harmonic cross-correlation is a robust registration technique that uses the normals of two overlapping point clouds to bring them into coarse rotational alignment. This registration technique, however, has a high computational cost, as spherical harmonics need to be calculated for every normal. By binning the normals, the computational efficiency is improved, as the spherical harmonics can be pre-computed and cached at each bin location. In this paper we evaluate the efficiency and accuracy of the equiangle grid, icosahedron subdivision and the Fibonacci spiral, an approach we propose. It is found that the equiangle grid has the best efficiency as it can perform direct binning, followed by the Fibonacci spiral and then the icosahedron, all of which decrease the computational cost compared to no binning. The Fibonacci spiral produces the highest accuracy of the three approaches while maintaining a low number of bins. The numbers of bins allowed by the equiangle grid and the icosahedron are much more restrictive than for the Fibonacci spiral. The analysis shows that the Fibonacci spiral can perform as well as the original cross-correlation algorithm without binning, while also providing a significant improvement in computational efficiency.
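    A rough sketch of the Fibonacci-spiral binning idea, assuming the standard Fibonacci-lattice construction of near-uniform points on the sphere and a SciPy nearest-neighbour lookup; the paper's actual bin placement and lookup strategy may differ:

```python
import numpy as np
from scipy.spatial import KDTree

def fibonacci_sphere(n_bins):
    """Near-uniform bin centres on the unit sphere via a Fibonacci lattice
    (one common construction; the paper's exact variant may differ)."""
    i = np.arange(n_bins)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n_bins
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

def bin_normals(normals, centres):
    """Accumulate unit normals into their nearest Fibonacci bin."""
    tree = KDTree(centres)                      # nearest-neighbour lookup; a cheaper direct mapping is possible
    _, idx = tree.query(normals)
    return np.bincount(idx, minlength=len(centres))  # per-bin counts fed to the cached spherical harmonics

centres = fibonacci_sphere(1024)
normals = np.random.normal(size=(50_000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
hist = bin_normals(normals, centres)
```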

    Analysing and Enhancing the Coarse Registration Pipeline

    The current and continual development of sensors and imaging systems capable of acquiring three-dimensional data provides a novel form in which the world can be expressed and examined. The acquisition process, however, is often limited by imaging systems only being able to view a portion of a scene or object from a single pose at a given time. A full representation can still be produced by shifting the system and registering subsequent acquisitions together. While many solutions to the registration problem have been proposed, no single approach is appropriate for all situations. This dissertation aims to coarsely register range images or point-clouds of a priori unknown pose by matching their overlapping regions. Using spherical harmonics to correlate normals in a coarse registration pipeline has previously been shown to be an effective means of registering partially overlapping point-clouds. The advantage of normals is their translation invariance, which permits the rotation and translation to be decoupled and determined separately. Examining each step of this pipeline in depth allows its registration capability to be quantified and identifies aspects which can be enhanced to further improve registration performance. The pipeline consists of three primary steps: identifying the rotation using spherical harmonics, identifying the translation in the Fourier domain, and automatically verifying whether the alignment is correct. Once coarse registration has been achieved, a fine registration algorithm can be used to refine and complete the alignment. This dissertation provides major contributions to knowledge at each step of the pipeline. Point-clouds with known ground-truth are used to examine the pipeline's capability, allowing its limitations to be determined; an analysis which has not been performed previously. This examination allowed modifications to individual components to be introduced and measured, establishing the benefit they provide. The rotation step received the greatest attention, as it is the primary weakness of the pipeline, especially since the nature of the overlap between point-clouds is unknown. Examining three schemes for binning normals found that equiangular binning, when appropriately normalised, had only a marginal decrease in accuracy with respect to the icosahedron and the introduced Fibonacci schemes. Overall, equiangular binning was the most appropriate due to its natural affinity for fast spherical-harmonic conversion. Weighting normals was found to provide the greatest benefit to registration performance. The introduction of a straightforward method of combining two different weighting schemes, using the orthogonality of complex values, increased correct alignments by approximately 80% with respect to the next best scheme; additionally, point-cloud pairs with overlap as low as 5% were brought into correct alignment. Transform transitivity, one of two introduced verification strategies, correctly classified almost 100% of point-cloud pair registrations when there were sufficient correct alignments. The enhancements made to the coarse registration pipeline throughout this dissertation provide significant improvements to its performance. The result is a pipeline with state-of-the-art capabilities that allow it to register point-clouds with minimal overlap and to correct alignments that are classified as misaligned. Even with its exceptional performance, it is unlikely that this pipeline has yet reached its pinnacle, as the introduced enhancements have the potential for further development.
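    As one possible illustration of the combined weighting idea described above, the following hypothetical sketch carries two per-normal weighting schemes through the binning step as the real and imaginary parts of a single complex bin value; the scheme names and the surrounding pipeline details are assumptions, not the dissertation's code:

```python
import numpy as np

def combined_bin_weights(bin_idx, weight_a, weight_b, n_bins):
    """Hypothetical sketch: carry two real-valued weighting schemes for the
    binned normals in one complex quantity, with the real and imaginary
    parts acting as orthogonal channels through the spherical-harmonic
    correlation. The dissertation's exact formulation may differ."""
    wa = np.zeros(n_bins)
    wb = np.zeros(n_bins)
    np.add.at(wa, bin_idx, weight_a)   # scheme A, e.g. a per-normal reliability weight (assumed)
    np.add.at(wb, bin_idx, weight_b)   # scheme B, e.g. a local-density weight (assumed)
    return wa + 1j * wb                # complex bin values fed to the rotation-estimation step
```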

    Analysis of Random Number Generators Using Monte Carlo Simulation

    Revisions are almost entirely in the introduction and conclusion. Results are unchanged; however, the comments and recommendations on different generators were changed, and more references were added.
    Comment: Email: [email protected]. 16 pages, LaTeX with 1 postscript figure. NPAC technical report SCCS-52.

    Malware distributions and graph structure of the Web

    Knowledge about the graph structure of the Web is important for understanding this complex socio-technical system and for devising proper policies supporting its future development. Knowledge about the differences between clean and malicious parts of the Web is important for understanding potential threats to its users and for devising protection mechanisms. In this study, we apply data science methods to a large crawl of surface and deep Web pages with the aim of increasing such knowledge. To accomplish this, we answer the following questions. Which theoretical distributions explain important local characteristics and network properties of websites? How do these characteristics and properties differ between clean and malicious (malware-affected) websites? What is the predictive power of local characteristics and network properties for classifying malware websites? To the best of our knowledge, this is the first large-scale study describing the differences in global properties between malicious and clean parts of the Web. In other words, our work builds on and bridges the gap between Web science, which tackles large-scale graph representations, and Web cyber security, which is concerned with malicious activities on the Web. The results presented herein can also help antivirus vendors devise approaches to improve their detection algorithms.
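    A hedged sketch of the classification question posed above, assuming a directed host-level Web graph in NetworkX, a simple illustrative feature set (in-degree, out-degree, PageRank), and a logistic-regression baseline rather than the study's actual feature set or models:

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def website_features(G, nodes):
    """Local characteristics and one network property per website node
    (illustrative choice; the study's feature set is not reproduced here)."""
    pagerank = nx.pagerank(G)
    return np.array([[G.in_degree(n), G.out_degree(n), pagerank[n]] for n in nodes])

def evaluate(G, nodes, labels):
    """Cross-validated ROC AUC for classifying malware-affected (1) vs clean (0)
    websites; G, nodes and labels are assumed inputs."""
    X, y = website_features(G, nodes), np.array(labels)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
```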