
    Coupling the time-warp algorithm with the graph-theoretical kinetic Monte Carlo framework for distributed simulations of heterogeneous catalysts

    Despite the successful and ever-widening adoption of kinetic Monte Carlo (KMC) simulations in the area of surface science and heterogeneous catalysis, the accessible length scales are still limited by the inherently sequential nature of the KMC framework. Simulating long-range surface phenomena, such as catalytic reconstruction and pattern formation, requires consideration of large surfaces/lattices, at the μm scale and beyond. However, handling such lattices with the sequential KMC framework is extremely challenging due to the heavy memory footprint and computational demand. The Time-Warp algorithm proposed by Jefferson [ACM Trans. Program. Lang. Syst., 1985, 7: 404-425] offers a way to enable distributed parallelization of discrete event simulations. Thus, to enable high-fidelity simulations of challenging systems in heterogeneous catalysis, we have coupled the Time-Warp algorithm with the Graph-Theoretical KMC framework [J. Chem. Phys., 134(21): 214115; J. Chem. Phys., 139(22): 224706] and implemented the approach in the general-purpose KMC code Zacros. We have further developed a “parallel-emulation” serial algorithm, which produces identical results to those obtained from the distributed runs (with the Time-Warp algorithm), thereby validating the correctness of our implementation. These advancements make Zacros the first-of-its-kind general-purpose KMC code with distributed computing capabilities, opening up opportunities for detailed meso-scale studies of heterogeneous catalysts and closer-than-ever comparisons of theory with experiments.
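    The rollback mechanism at the heart of the Time-Warp algorithm can be illustrated with a minimal sketch: a logical process executes events optimistically, saves its state before each event, and rolls back when a "straggler" message arrives with a timestamp earlier than its local virtual time. The class, event names, and toy state update below are assumptions for illustration only, not the Zacros implementation.

```python
# Minimal sketch of Time-Warp style optimistic event processing for a single
# logical process (LP). Names and the toy state update are hypothetical.
import bisect

class TimeWarpLP:
    def __init__(self, initial_state):
        self.state = dict(initial_state)
        self.lvt = 0.0             # local virtual time
        self.processed = []        # history of (timestamp, event, saved_state)
        self.pending = []          # future events, kept sorted by timestamp

    def schedule(self, timestamp, event):
        """Receive an event; a straggler (timestamp < LVT) triggers a rollback."""
        if timestamp < self.lvt:
            self._rollback(timestamp)
        bisect.insort(self.pending, (timestamp, event))

    def _rollback(self, timestamp):
        # Undo optimistically processed events at or after the straggler's time,
        # restoring the saved state and re-queuing them for re-execution.
        while self.processed and self.processed[-1][0] >= timestamp:
            t, event, saved = self.processed.pop()
            self.state = saved
            bisect.insort(self.pending, (t, event))
        self.lvt = self.processed[-1][0] if self.processed else 0.0

    def step(self):
        """Optimistically process the earliest pending event, saving state first."""
        if not self.pending:
            return False
        t, event = self.pending.pop(0)
        self.processed.append((t, event, dict(self.state)))
        self.state[event] = self.state.get(event, 0) + 1   # toy state update
        self.lvt = t
        return True

# Toy usage: a late ("straggler") event at t=1.5 forces a rollback of t=2.0.
lp = TimeWarpLP({})
lp.schedule(1.0, "adsorb"); lp.schedule(2.0, "desorb")
lp.step(); lp.step()
lp.schedule(1.5, "diffuse")        # straggler -> rollback, then re-execute
while lp.step():
    pass
print(lp.lvt, lp.state)
```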

    A Power Cap Oriented Time Warp Architecture

    Controlling power usage has become a core objective in modern computing platforms. In this article we present an innovative Time Warp architecture designed to run parallel simulations efficiently under a power cap. Our architectural organization treats power usage as a foundational design principle, as opposed to classical power-unaware Time Warp designs. We provide early experimental results showing the potential of our proposal.
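    As a rough illustration of what operating under a power cap can involve (the article's actual architecture is not reproduced here), a feedback controller might periodically compare measured power to the cap and adjust the number of active simulation workers. The callback names and the simple control rule below are assumptions, not the proposed design.

```python
# Illustrative power-cap feedback loop for a parallel simulation: compare
# measured power to the cap and grow/shrink the active worker count.
# All names and the control rule are hypothetical.
def power_cap_controller(read_power_watts, set_active_workers,
                         cap_watts, max_workers, step=1):
    active = max_workers
    set_active_workers(active)
    while True:
        power = read_power_watts()          # e.g. sampled from hardware counters
        if power > cap_watts and active > 1:
            active -= step                  # over budget: shed parallelism
        elif power < 0.9 * cap_watts and active < max_workers:
            active += step                  # headroom: restore parallelism
        set_active_workers(active)
        yield active                        # hand control back to the simulator

# Example wiring (with stub callbacks, once per simulation epoch):
#   ctl = power_cap_controller(read_power, pool_resize, cap_watts=80, max_workers=16)
#   next(ctl)
```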

    Event-Based Motion Segmentation by Motion Compensation

    In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes (called "events"), with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are, therefore, a more natural fit than traditional cameras to acquire motion, especially at high speeds, where traditional cameras suffer from motion blur. However, distinguishing between events caused by different moving objects and by the camera's ego-motion is a challenging task. We present the first per-event segmentation method for splitting a scene into independently moving objects. Our method jointly estimates the event-object associations (i.e., segmentation) and the motion parameters of the objects (or the background) by maximization of an objective function, which builds upon recent results on event-based motion compensation. We provide a thorough evaluation of our method on a public dataset, outperforming the state of the art by as much as 10%. We also show the first quantitative evaluation of a segmentation algorithm for event cameras, yielding around 90% accuracy at 4 pixels relative displacement.
    Comment: When viewed in Acrobat Reader, several of the figures animate. Video: https://youtu.be/0q6ap_OSBA
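    A minimal sketch of the motion-compensation idea such objectives build on: warp events by a candidate image velocity, accumulate the warped events into an image, and score the velocity by the sharpness (variance) of that image. The function names and the crude grid search below are assumptions; the paper's method additionally alternates this kind of fit with per-event object assignments, which is omitted here.

```python
# Motion compensation of events (x, y, t): warp by a candidate 2-D velocity,
# build an image of warped events, and score it by contrast (variance).
import numpy as np

def contrast_of_warped_events(events, velocity, t_ref, height, width):
    """events: (N, 3) array of (x, y, t); velocity: (vx, vy) in pixels/second."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - velocity[0] * (t - t_ref)).astype(int)
    yw = np.round(y - velocity[1] * (t - t_ref)).astype(int)
    inside = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    image = np.zeros((height, width))
    np.add.at(image, (yw[inside], xw[inside]), 1.0)   # image of warped events
    return image.var()                                # sharper image -> higher score

def fit_velocity(events, t_ref, height, width, candidates):
    """Grid-search the candidate velocity maximizing contrast (a crude stand-in
    for gradient-based maximization of the objective)."""
    return max(candidates, key=lambda v: contrast_of_warped_events(
        events, v, t_ref, height, width))
```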

    Scalable virtual viewpoint image synthesis for multiple camera environments

    One of the main aims of emerging audio-visual (AV) applications is to provide interactive navigation within a captured event or scene. This paper presents a view synthesis algorithm that provides a scalable and flexible approach to virtual viewpoint synthesis in multiple camera environments. The multi-view synthesis (MVS) process consists of four phases, which are described in detail: surface identification, surface selection, surface boundary blending and surface reconstruction. MVS identifies and selects only the best-quality surface areas from the set of available reference images, thereby reducing perceptual errors in virtual view reconstruction. The approach is independent of the camera setup and scalable, as virtual views can be created given 1 to N of the available video inputs. Thus, MVS provides interactive AV applications with a means to handle scenarios where camera inputs increase or decrease over time.
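    As an illustrative sketch (not the paper's MVS algorithm), the select-then-blend idea can be expressed as: for each virtual-view pixel, pick the reference camera with the best quality score, then feather the hard selection boundaries before reconstructing the view. The function, its inputs, and the quality maps below are assumptions, and the reference images are assumed to be already warped into the virtual viewpoint.

```python
# Per-pixel selection of the best reference camera, followed by boundary
# feathering and weighted reconstruction. All names are hypothetical.
import numpy as np
from scipy.ndimage import uniform_filter

def synthesize(warped_views, quality_maps, blend_radius=3):
    """warped_views: (K, H, W) grayscale images warped to the virtual view.
    quality_maps: (K, H, W) per-pixel quality scores for each reference camera."""
    K, H, W = warped_views.shape
    best = np.argmax(quality_maps, axis=0)                 # surface selection per pixel
    weights = np.stack([(best == k).astype(float) for k in range(K)])
    weights = uniform_filter(weights, size=(1, blend_radius, blend_radius))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-9   # boundary blending
    return (weights * warped_views).sum(axis=0)            # reconstruction
```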

    The First Provenance Challenge

    The first Provenance Challenge was set up in order to provide a forum for the community to help understand the capabilities of different provenance systems and the expressiveness of their provenance representations. To this end, a Functional Magnetic Resonance Imaging workflow was defined, which participants had to either simulate or run in order to produce some provenance representation, from which a set of identified queries had to be implemented and executed. Sixteen teams responded to the challenge and submitted their inputs. In this paper, we present the challenge workflow and queries, and summarise the participants' contributions.
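    A minimal sketch of the kind of provenance record and lineage query involved, under the assumption of a simple "derived-from" graph; it is illustrative only and does not reproduce the challenge's fMRI workflow or any participant's provenance system.

```python
# Each workflow step records which artifacts it used and generated; a lineage
# query walks that record backwards. Artifact names are hypothetical.
derived_from = {}   # artifact -> set of artifacts it was derived from

def record_step(inputs, outputs):
    for out in outputs:
        derived_from.setdefault(out, set()).update(inputs)

def lineage(artifact):
    """All upstream artifacts that contributed to `artifact` (transitive closure)."""
    seen, stack = set(), [artifact]
    while stack:
        for parent in derived_from.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Toy two-step workflow, then a lineage query on the final output.
record_step(["raw_image", "reference_atlas"], ["aligned_image"])
record_step(["aligned_image"], ["atlas_graphic"])
print(lineage("atlas_graphic"))   # {'aligned_image', 'raw_image', 'reference_atlas'}
```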