
    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven methods is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, reviewing the literature and relating existing works through both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.
    Comment: 10 pages, 19 figures
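    The central idea above, aggregating information from a whole shape collection rather than analyzing each shape in isolation, can be illustrated by one of its simplest building blocks: retrieving similar shapes via nearest neighbors in a descriptor space. The Python sketch below uses placeholder descriptors and is only an illustration, not any specific method from the report.

        import numpy as np

        def knn_shape_retrieval(query_desc, collection_descs, k=5):
            """Indices of the k shapes whose descriptors lie closest
            to the query descriptor (Euclidean distance)."""
            dists = np.linalg.norm(collection_descs - query_desc, axis=1)
            return np.argsort(dists)[:k]

        # Toy collection: 100 shapes, each summarized by a 64-dim descriptor
        # (e.g. a histogram of local geometric features; placeholder data).
        rng = np.random.default_rng(0)
        collection = rng.normal(size=(100, 64))
        query = collection[17] + 0.01 * rng.normal(size=64)
        print(knn_shape_retrieval(query, collection, k=5))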

    Dynamic fluctuations coincide with periods of high and low modularity in resting-state functional brain networks

    We investigate the relationship between resting-state fMRI functional connectivity estimated over long periods of time and time-varying functional connectivity estimated over shorter time intervals. We show that using Pearson's correlation to estimate functional connectivity implies that the range of fluctuations of functional connections over short time scales is subject to statistical constraints imposed by their connectivity strength over longer scales. We present a method for estimating time-varying functional connectivity that is designed to mitigate this issue and allows us to identify episodes where functional connections are unexpectedly strong or weak. We apply this method to data recorded from N = 80 participants, and show that the number of unexpectedly strong/weak connections fluctuates over time, and that these variations coincide with intermittent periods of high and low modularity in time-varying functional connectivity. We also find that during periods of relative quiescence, regions associated with the default mode network tend to join communities with attentional, control, and primary sensory systems. In contrast, during periods where many connections are unexpectedly strong/weak, default mode regions dissociate and form distinct modules. Finally, we show that, while all functional connections can at times be stronger (more positively correlated) or weaker (more negatively correlated) than expected, a small number of connections, mostly within the visual and somatomotor networks, do so a disproportionate number of times. Our statistical approach allows the detection of functional connections that fluctuate more or less than expected based on their long-time averages, and may be of use in future studies characterizing the spatio-temporal patterns of time-varying functional connectivity.
    Comment: 47 pages, 8 figures, 4 supplementary figures
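    For context, the baseline estimator that such analyses start from is sketched below in Python: Pearson correlation matrices computed over short sliding windows of a regions-by-time array, alongside the long-time (static) correlation. This is only the baseline; the paper's statistical correction for the constraints imposed by static connectivity is not reproduced here, and the data are placeholders.

        import numpy as np

        def sliding_window_fc(ts, win=60, step=1):
            """Time-varying functional connectivity: Pearson correlation
            matrices over sliding windows of a (regions x time) array."""
            n_regions, n_t = ts.shape
            mats = [np.corrcoef(ts[:, s:s + win])
                    for s in range(0, n_t - win + 1, step)]
            return np.stack(mats)  # (n_windows, n_regions, n_regions)

        # Toy data: 10 regions, 500 time points (placeholder, not fMRI).
        rng = np.random.default_rng(0)
        ts = rng.normal(size=(10, 500))
        windowed = sliding_window_fc(ts, win=60, step=10)
        static = np.corrcoef(ts)  # connectivity over the full recording
        print(windowed.shape, static.shape)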

    SurveyMan: Programming and Automatically Debugging Surveys

    Surveys can be viewed as programs, complete with logic, control flow, and bugs. Word choice or the order in which questions are asked can unintentionally bias responses. Vague, confusing, or intrusive questions can cause respondents to abandon a survey. Surveys can also have runtime errors: inattentive respondents can taint results. This effect is especially problematic when deploying surveys in uncontrolled settings, such as on the web or via crowdsourcing platforms. Because the results of surveys drive business decisions and inform scientific conclusions, it is crucial to make sure they are correct. We present SurveyMan, a system for designing, deploying, and automatically debugging surveys. Survey authors write their surveys in a lightweight domain-specific language aimed at end users. SurveyMan statically analyzes the survey to provide feedback to survey authors before deployment. It then compiles the survey into JavaScript and deploys it either to the web or to a crowdsourcing platform. SurveyMan's dynamic analyses automatically find survey bugs and control for the quality of responses. We evaluate SurveyMan's algorithms analytically and empirically, demonstrating its effectiveness with case studies of social science surveys conducted via Amazon's Mechanical Turk.
    Comment: Submitted version; accepted to OOPSLA 201
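    Two of the ideas in this abstract, randomizing question and option order to expose order effects, and flagging respondents whose answers look inattentive, can be sketched in a few lines of Python. The snippet below is an illustration of those ideas only; it is not SurveyMan's domain-specific language, compiler, or quality-control algorithms, and all names are hypothetical.

        import random

        questions = {
            "q1": ["Strongly agree", "Agree", "Disagree", "Strongly disagree"],
            "q2": ["Yes", "No", "Unsure"],
            "q3": ["Daily", "Weekly", "Monthly", "Never"],
        }

        def randomized_survey(questions, seed):
            """Per-respondent ordering of questions and answer options."""
            rng = random.Random(seed)
            order = list(questions)
            rng.shuffle(order)
            return [(q, rng.sample(questions[q], len(questions[q]))) for q in order]

        def looks_inattentive(responses):
            """Crude positional check: the respondent always picked the first
            option as displayed, regardless of what that option was."""
            return all(picked == shown[0] for shown, picked in responses)

        # A respondent who always picks the first displayed option:
        survey = randomized_survey(questions, seed=0)
        responses = [(shown, shown[0]) for _, shown in survey]
        print(looks_inattentive(responses))  # True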

    Efficient high-dimensional entanglement imaging with a compressive sensing, double-pixel camera

    We implement a double-pixel, compressive sensing camera to efficiently characterize, at high resolution, the spatially entangled fields produced by spontaneous parametric downconversion. This technique leverages sparsity in the spatial correlations between entangled photons to improve acquisition times over raster scanning by a scaling factor of up to n^2/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates that compressive sensing can be especially effective for higher-order measurements on correlated systems.
    Comment: 10 pages, 7 figures
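    The reported channel capacity comes from the classical mutual information between detector outcomes in conjugate bases. A minimal Python sketch of that estimate from a joint coincidence histogram is given below; the histogram is placeholder data, not the measured entangled fields.

        import numpy as np

        def mutual_information_bits(joint_counts):
            """I(X;Y) in bits from a joint coincidence histogram:
            sum over x,y of p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]."""
            p = joint_counts / joint_counts.sum()
            px = p.sum(axis=1, keepdims=True)
            py = p.sum(axis=0, keepdims=True)
            nz = p > 0
            return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

        # Toy 32x32 joint histogram with strong position correlations:
        # mostly diagonal coincidences plus a uniform background.
        n = 32
        joint = np.eye(n) * 100.0 + 1.0
        print(mutual_information_bits(joint))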

    A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on a comparison of several stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which performs pixel-wise matching and relies on the application of consistency constraints during matching cost aggregation, is discussed. The results of tests performed on real and simulated stereo image datasets are presented, evaluating in particular the accuracy of the obtained digital surface models. Several algorithms and different implementations are considered in the comparison, using free software such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan), and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparison also considers the completeness and the level of detail within fine structures, as well as the reliability and repeatability of the obtainable data.
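    Since OpenCV is one of the compared implementations, a minimal example of its semi-global block matcher applied to a rectified stereo pair is sketched below. File names and parameter values are illustrative placeholders, not the configurations evaluated in the paper.

        import cv2
        import numpy as np

        # Rectified stereo pair (placeholder file names).
        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        block = 5
        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,      # must be a multiple of 16
            blockSize=block,
            P1=8 * block * block,    # SGM smoothness penalties
            P2=32 * block * block,
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=2,
        )

        # compute() returns fixed-point disparities scaled by 16.
        disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
        disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
        cv2.imwrite("disparity.png", disp_vis.astype(np.uint8))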