
    Parallel centerline extraction on the GPU

    Centerline extraction is important in a variety of visualization applications, including shape analysis, geometry processing, and virtual endoscopy. Centerlines allow accurate measurements of length along winding tubular structures, assist automatic virtual navigation, and provide a path-planning system to control the movement and orientation of a virtual camera. However, efficiently computing centerlines with the desired accuracy has been a major challenge. Existing centerline methods are either not fast enough or not accurate enough for interactive application to complex 3D shapes. Some methods based on distance mapping are accurate, but these are sequential algorithms with limited performance when running on the CPU. To our knowledge, there is no accurate parallel centerline algorithm that can take advantage of modern many-core parallel computing resources, such as GPUs, to perform automatic centerline extraction from large data volumes at interactive speed and with high accuracy. In this paper, we present a new parallel centerline extraction algorithm suitable for implementation on a GPU to produce highly accurate, 26-connected, one-voxel-thick centerlines at interactive speed. The resulting centerlines are as accurate as those produced by a state-of-the-art sequential CPU method [40], while being computed hundreds of times faster. Applications to fly-through path planning and virtual endoscopy are discussed. Experimental results demonstrating centeredness, robustness, and efficiency are presented.
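    The abstract describes the desired output (26-connected, one-voxel-thick centerlines) and mentions distance-mapping methods, but gives no implementation details. The sketch below is only a minimal, sequential CPU illustration of distance-map-guided thinning using off-the-shelf SciPy/scikit-image routines; it is not the paper's GPU algorithm, and the helper name `centerline_voxels` is hypothetical.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def centerline_voxels(volume):
    """Sequential CPU sketch: a one-voxel-thick skeleton plus a centeredness cue.

    Not the paper's parallel GPU method; this only illustrates the kind of
    thin, centered voxel path that centerline extraction aims to produce.
    """
    mask = np.asarray(volume) > 0                  # binary segmentation of the tubular shape
    dist = ndimage.distance_transform_edt(mask)    # distance to the boundary (centeredness cue)
    skeleton = skeletonize(mask)                   # 3-D thinning (scikit-image >= 0.19)
    coords = np.argwhere(skeleton)                 # voxel coordinates on the centerline
    radii = dist[skeleton]                         # local radius estimate at each centerline voxel
    return coords, radii
```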

    Optimal Reissue Policies for Reducing Tail Latency

    Interactive services send redundant requests to multiple different replicas to meet stringent tail latency requirements. These additional (reissue) requests mitigate the impact of non-deterministic delays within the system and thus increase the probability of receiving an on-time response. There are two existing approaches to using reissue requests to reduce tail latency. (1) Reissue requests immediately to one or more replicas, which multiplies the load and runs the risk of overloading the system. (2) Reissue requests if not completed after a fixed delay. The delay helps to bound the number of extra reissue requests, but it also reduces the chance for those requests to respond before a tail latency target. We introduce a new family of reissue policies, Single-Time / Random (SingleR), that reissues requests after a delay d with probability q. SingleR employs randomness to bound the reissue rate while allowing requests to be reissued early enough that they have sufficient time to respond, exploiting the benefits of both the immediate and the delayed reissue of prior work. We formally prove, within a simplified analytical model, that SingleR is optimal even when compared to more complex policies that reissue multiple times. To use SingleR for interactive services, we provide efficient algorithms for calculating the optimal reissue delay and probability from response-time logs through a data-driven approach. We apply iterative adaptation for systems with load-dependent queuing delays. The key advantage of this data-driven approach is its wide applicability and effectiveness for systems with various design choices and workload properties. We evaluated SingleR policies thoroughly. We use simulation to illustrate its internals and demonstrate its robustness across a wide range of workloads. We conduct system experiments on the Redis key-value store and the Lucene search server. The results show that, for utilizations ranging from 40–60%, SingleR reduces the 99th-percentile latency of Redis by 30–70% while reissuing only 2% of requests, and the 99th-percentile latency of Lucene by 15–25% while reissuing only 1%.
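    The abstract specifies the SingleR policy only at a high level: reissue a still-pending request after delay d with probability q, with d and q fitted from response-time logs. The sketch below is a hedged, resampling-based illustration of that idea, assuming NumPy; `p99_under_singler`, `pick_policy`, the grid choices, and the 2% reissue budget are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def p99_under_singler(samples, d, q, n=200_000, seed=0):
    """Estimate p99 latency under SingleR(d, q) by resampling an empirical
    response-time log. Returns (estimated p99, observed reissue rate)."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    t1 = rng.choice(samples, n)                 # primary response times
    t2 = d + rng.choice(samples, n)             # reissued copy starts at time d
    reissue = (t1 > d) & (rng.random(n) < q)    # reissue only if still pending, w.p. q
    latency = np.where(reissue, np.minimum(t1, t2), t1)
    return np.quantile(latency, 0.99), reissue.mean()

def pick_policy(samples, budget=0.02):
    """Grid-search (d, q) minimizing estimated p99 subject to a reissue-rate budget.
    The delay and probability grids here are arbitrary illustrative choices."""
    delays = np.quantile(samples, np.linspace(0.50, 0.99, 25))
    probs = np.linspace(0.05, 1.0, 20)
    best = (np.inf, None)
    for d in delays:
        for q in probs:
            p99, rate = p99_under_singler(samples, d, q)
            if rate <= budget and p99 < best[0]:
                best = (p99, (d, q))
    return best   # (estimated p99, (delay, probability)), or (inf, None) if infeasible
```

    A usage note: feeding `pick_policy` a log of measured response times yields a (delay, probability) pair whose estimated reissue rate stays within the budget, which mirrors the data-driven selection the abstract describes without modeling load-dependent queuing effects.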

    Beyond ‘Interaction’: How to Understand Social Effects on Social Cognition

    In recent years, a number of philosophers and cognitive scientists have advocated for an ‘interactive turn’ in the methodology of social-cognition research: to become more ecologically valid, we must design experiments that are interactive, rather than merely observational. While the practical aim of improving ecological validity in the study of social cognition is laudable, we think that the notion of ‘interaction’ is not suitable for this task: as it is currently deployed in the social-cognition literature, this notion leads to serious conceptual and methodological confusion. In this paper, we tackle this confusion on three fronts: 1) we revise the ‘interactionist’ definition of interaction; 2) we demonstrate a number of potential methodological confounds that arise in interactive experimental designs; and 3) we show that ersatz interactivity works just as well as the real thing. We conclude that the notion of ‘interaction’, as it is currently being deployed in this literature, obscures an accurate understanding of human social cognition.

    Synchronization of Speech and Gesture: Evidence for Interaction in Action

    Peer reviewed postprint