
    Simplified Planner Selection

    There exists no planning algorithm that outperforms all others. Therefore, it is important to know which algorithm works well on a task. A recently published approach uses either image or graph convolutional neural networks to solve this problem and achieves top performance. The transformation from the task to an image in particular discards a lot of information. Thus, we would like to know what the network is learning and whether this is reasonable. As this is currently not possible, we take one step back. We identify a small set of simple graph features and show that elementary, interpretable machine learning techniques can use those features to outperform the neural-network-based approach. Furthermore, we evaluate the importance of those features and verify that the performance of our approach is robust to changes in the training and test data.
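The feature-based approach can be sketched in a few lines: compute a handful of interpretable features from a task's graph representation (e.g. its causal graph) and feed them to a simple, inspectable decision rule. The feature set, planner names, and threshold below are hypothetical illustrations, not the paper's actual choices.

```python
def graph_features(edges, n_nodes):
    """A small set of simple, interpretable graph features.
    (Hypothetical feature set; the paper's exact features may differ.)"""
    degree = [0] * n_nodes
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    n_edges = len(edges)
    density = 2 * n_edges / (n_nodes * (n_nodes - 1)) if n_nodes > 1 else 0.0
    return {
        "nodes": n_nodes,
        "edges": n_edges,
        "density": density,
        "max_degree": max(degree),
        "mean_degree": sum(degree) / n_nodes,
    }

def select_planner(feats):
    """A toy interpretable selector (a single decision stump); elementary
    models such as decision trees would be learned from such features."""
    return "planner_A" if feats["density"] > 0.5 else "planner_B"
```

Because every feature and every split is human-readable, it is straightforward to inspect why a planner was chosen, which is exactly the interpretability argument the abstract makes against the image- and graph-network approach.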

    Merge-and-shrink abstractions for classical planning: theory, strategies, and implementation

    Classical planning is the problem of finding a sequence of deterministic actions in a state space that lead from an initial state to a state satisfying some goal condition. The dominant approach to optimally solving planning tasks is heuristic search, in particular A* search combined with an admissible heuristic. While there exist many different admissible heuristics, we focus on abstraction heuristics in this thesis, and in particular, on the well-established merge-and-shrink heuristics. Our main theoretical contribution is to provide a comprehensive description of the merge-and-shrink framework in terms of transformations of transition systems. Unlike previous accounts, our description is fully compositional, i.e., it can be understood by understanding each transformation in isolation. In particular, in addition to the name-giving merge and shrink transformations, we also describe pruning and label reduction as such transformations. The latter is based on generalized label reduction, a new theory that removes all of the restrictions of the previous definition of label reduction. We study the four types of transformations in terms of desirable formal properties and explain how these properties transfer to heuristics being admissible and consistent or even perfect. We also describe an optimized implementation of the merge-and-shrink framework that substantially improves the efficiency compared to previous implementations. Furthermore, we investigate the expressive power of merge-and-shrink abstractions by analyzing factored mappings, the data structure they use for representing functions. In particular, we show that there exist certain families of functions that can be compactly represented by so-called non-linear factored mappings but not by linear ones. On the practical side, we contribute several non-linear merge strategies to the merge-and-shrink toolbox.
In particular, we adapt a merge strategy from model checking to planning, provide a framework to enhance existing merge strategies based on symmetries, devise a simple score-based merge strategy that minimizes the maximum size of transition systems of the merge-and-shrink computation, and describe another framework to enhance merge strategies based on an analysis of causal dependencies of the planning task. In a large experimental study, we show the evolution of the performance of merge-and-shrink heuristics on planning benchmarks. Starting with the state of the art before the contributions of this thesis, we subsequently evaluate all of our techniques and show that state-of-the-art non-linear merge-and-shrink heuristics improve significantly over the previous state of the art.
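The two name-giving transformations can be illustrated on toy transition systems. This is a deliberately simplified sketch (a real merge-and-shrink implementation keeps one global label set and synchronizes on every label; here the product synchronizes only on labels the two factors share):

```python
from itertools import product

def merge(ts1, ts2):
    """Merge step: synchronized product of two transition systems.
    A TS is (states, transitions) with transitions: {label: {(src, dst), ...}}."""
    states = set(product(ts1[0], ts2[0]))
    trans = {}
    for lab in set(ts1[1]) & set(ts2[1]):
        trans[lab] = {((s1, s2), (t1, t2))
                      for (s1, t1) in ts1[1][lab]
                      for (s2, t2) in ts2[1][lab]}
    return states, trans

def shrink(ts, mapping):
    """Shrink step: abstract a TS under a state mapping (induced abstraction)."""
    states = set(mapping.values())
    trans = {lab: {(mapping[s], mapping[t]) for (s, t) in pairs}
             for lab, pairs in ts[1].items()}
    return states, trans
```

Pruning and label reduction fit the same mould: each is a transformation from one (states, transitions) pair to another, which is what makes the fully compositional description possible.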

    Simulations of Kinetic Electrostatic Electron Nonlinear (KEEN) Waves with Variable Velocity Resolution Grids and High-Order Time-Splitting

    KEEN waves are nonlinear, non-stationary, self-organized asymptotic states in Vlasov plasmas outside the scope or purview of linear theory constructs such as electron plasma waves or ion acoustic waves. Nonlinear stationary mode theories such as those leading to BGK modes also do not apply. The range in velocity that is strongly perturbed by KEEN waves depends on the amplitude and duration of the ponderomotive force used to drive them. Smaller amplitude drives create highly localized structures attempting to coalesce into KEEN waves. These cases have much more chaotic and intricate time histories than strongly driven ones. The narrow range in which one must maintain adequate velocity resolution in the weakly driven cases challenges fixed-grid numerical schemes. What is missing there is the capability of resolving locally in velocity while maintaining a coarse grid outside the highly perturbed region of phase space. We here report on a new semi-Lagrangian Vlasov-Poisson solver based on conservative non-uniform cubic splines in velocity that tackles this problem head on. An additional feature of our approach is the use of a new high-order time-splitting scheme which allows much longer simulations per computational effort. This is needed for low amplitude runs, which take a long time to set up KEEN waves, if they are able to do so at all. The new code's performance is compared to uniform grid simulations and the advantages quantified. The birth pains associated with weakly driven KEEN waves are captured in these simulations. These techniques allow the efficient simulation of KEEN waves in multiple dimensions, which will be tackled next, as well as generalizations to Vlasov-Maxwell codes, which are essential to understanding the impact of KEEN waves in practice.
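The benefit of time-splitting can be seen on the simplest system with the same position/velocity advection structure as Vlasov-Poisson. The sketch below applies a second-order Strang step to a single particle in a harmonic potential; it stands in for the higher-order, grid-based splitting of the actual solver.

```python
import math

def strang_step(x, v, dt):
    """One Strang (second-order) split step for x' = v, v' = -x:
    half advection in x, full advection in v, half advection in x."""
    x += 0.5 * dt * v
    v -= dt * x
    x += 0.5 * dt * v
    return x, v

def integrate(x, v, dt, n_steps):
    for _ in range(n_steps):
        x, v = strang_step(x, v, dt)
    return x, v

# After one full period the state should return close to where it started.
x, v = integrate(1.0, 0.0, 2 * math.pi / 1000, 1000)
```

Higher-order compositions of such split steps shrink the splitting error per step, which is what buys the much longer simulations per unit of computational effort that the weakly driven runs require.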

    An Interactive Deformable Model Segmentation Algorithm Driven by Morphological Dilations and Erosions Constrained by an Exclusion Band

    This study introduces an interactive image segmentation algorithm for the extraction of ill-defined edges (faint, blurred or partially broken) often observed in small-scale imaging. It is based on a simplified deformable elastic model evolution paradigm. Segmentation is achieved as a two-step region growing, shrinking and merging simulation constrained by an exclusion band built around the edges of the regions of interest, defined from a variation image. The simulation starts from a set of unlabeled markers and the respective elastic models. During the first step, model evolution occurs entirely outside the exclusion band, driven by alternating action-reaction movements. Forward and backward movements are performed by constrained binary morphological dilations and erosions. Constraints allow controlling how far models can move through narrow gaps. At the end of the first step, models remaining from merging operations receive unique and exclusive labels. In the second and final step, model expansion occurs entirely inside the exclusion band, now driven only by unconstrained binary morphological dilations. A point where two labeled models come into contact defines an edge point. The simulation goes on until the concurrent expansion of all models comes to a complete stop. At this point, the edges of the regions of interest have been extracted. Interactivity introduces the possibility to correct small imperfections in the edge positioning by changing a parameter controlling action-reaction or by changing a marker's size, position and shape. Loosely inspired by traditional approaches such as PDE Level-Set based curve evolution and Immersion Simulation, the algorithm presents a solution to the problem of "synchronizing the concurrent evolution of a large number of models" and an "automatic stopping criterion" for the front propagation. An integer-arithmetic implementation ensures linear execution time. The results obtained for real applications show that even ill-defined edges can be located with the desired accuracy, thanks to the algorithm's features and to the interactivity exerted by the user during the segmentation procedure.
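The forward move of the first step can be sketched as one constrained binary dilation on a set-based image representation. The 4-connected structuring element and the grid representation are illustrative choices, not necessarily those of the paper.

```python
def constrained_dilate(region, exclusion_band, shape):
    """One constrained binary morphological dilation (4-connected):
    the region grows by one pixel layer but never enters the exclusion band."""
    h, w = shape
    grown = set(region)
    for (r, c) in region:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (r + dr, c + dc)
            if 0 <= p[0] < h and 0 <= p[1] < w and p not in exclusion_band:
                grown.add(p)
    return grown
```

The backward (erosion) move and the second-phase unconstrained dilation are variations of the same loop with the constraint test changed. Iterating the step for every labeled model until no set changes gives the "complete stop" criterion, and a pixel first reached by two different labels is an edge point.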

    Exploratory Visualization of Data Pattern Changes in Multivariate Data Streams

    More and more researchers are focusing on the management, querying and pattern mining of streaming data. The visualization of streaming data, however, is still a very new topic. Streaming data is very similar to time-series data since each datapoint has a time dimension. Although the latter has been well studied in the area of information visualization, a key characteristic of streaming data, unbounded and large-scale input, is rarely investigated. Moreover, most techniques for visualizing time-series data focus on univariate data and seldom convey multidimensional relationships, which is an important requirement in many application areas. Therefore, it is necessary to develop appropriate techniques for streaming data instead of directly applying time-series visualization techniques to it. As one of the main contributions of this dissertation, I introduce a user-driven approach for the visual analytics of multivariate data streams based on effective visualizations via a combination of windowing and sampling strategies. To help users identify and track how data patterns change over time, not only the current sliding window content but also abstractions of past data in which users are interested are displayed. Sampling is applied within each single time window to help reduce visual clutter as well as preserve data patterns. Sampling ratios scheduled for different windows reflect the degree of user interest in the content. A degree of interest (DOI) function is used to represent a user's interest in different windows of the data. Users can apply two types of pre-defined DOI functions, namely RC (recent change) and PP (periodic phenomena) functions. The developed tool also allows users to interactively adjust DOI functions, in a manner similar to transfer functions in volume visualization, to enable a trial-and-error exploration process. In order to visually convey the change of multidimensional correlations, four layout strategies were designed.
User studies showed that three of these are effective techniques for conveying data pattern changes compared to traditional time-series data visualization techniques. Based on this evaluation, a guide for the selection of appropriate layout strategies was derived, considering the characteristics of the targeted datasets and data analysis tasks. Case studies were used to show the effectiveness of DOI functions and the various visualization techniques. A second contribution of this dissertation is a data-driven framework to merge, and thus condense, time windows having small or no changes and to distort the time axis. Only significant changes are shown to users. Pattern vectors are introduced as a compact format for representing the discovered data model. Three views (juxtaposed views, pattern vector views, and pattern change views) were developed for conveying data pattern changes. The first shows more details of the data but needs more canvas space; the last two need much less canvas space by conveying only the pattern parameters, but lose many data details. The experiments showed that the proposed merge algorithms preserve more change information than intuitive pattern-blind averaging. A user study was also conducted to confirm that the proposed techniques can help users find pattern changes more quickly than via a non-distorted time axis. A third contribution of this dissertation is a set of history views with related interaction techniques, developed to work under two modes: non-merge and merge. In the former mode, the framework can use natural hierarchical time units or ones defined by domain experts to represent timelines. This can help users navigate across long time periods. Grid or virtual calendar views were designed to provide a compact overview of the history data. In addition, MDS pattern starfields, distance maps, and pattern brushes were developed to enable users to quickly investigate the degree of pattern similarity among different time periods.
For the merge mode, merge algorithms were applied to selected time windows to generate a merge-based hierarchy. Contiguous time windows having similar patterns are merged first. Users can choose different levels of merging, trading off detail in the data against visual clutter in the visualizations. The usability evaluation demonstrated that most participants could understand the concepts of the history views correctly and finished the assigned tasks with high accuracy and relatively fast response times.
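The DOI-driven sampling schedule can be sketched as follows; the exponential form of the RC function and the seeded sampler are hypothetical illustrations of the idea, not the dissertation's exact definitions.

```python
import random

def rc_doi(window_age, decay=0.5):
    """'Recent change' style DOI: interest falls off with window age
    (age 0 = the current sliding window). Hypothetical functional form."""
    return decay ** window_age

def sample_window(window, doi, seed=0):
    """Keep a fraction of a window's points proportional to its DOI,
    so less interesting windows contribute less visual clutter."""
    rng = random.Random(seed)
    k = max(1, round(doi * len(window)))
    return rng.sample(window, k)
```

A PP (periodic phenomena) function would instead peak at ages that are multiples of the period of interest; interactively editing the DOI curve, as one edits a transfer function in volume visualization, then reschedules the per-window sampling ratios.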

    Analysis of cardiac arrhythmia sources using Feynman diagrams

    The contraction of the heart muscle is triggered by self-organizing electrical patterns. Abnormalities in these patterns lead to cardiac arrhythmias, a prominent cause of mortality worldwide. The targeted treatment or prevention of arrhythmias requires a thorough understanding of the interacting wavelets, vortices and conduction block sites within the excitation pattern. Currently, there is no conceptual framework that covers the elementary processes during arrhythmogenesis in detail, in particular the transient pivoting patterns observed in patients, which can be interleaved with periods of less fragmented waves. Here, we provide such a framework in terms of quasiparticles and Feynman diagrams, which were originally developed in theoretical physics. We identified three different quasiparticles in excitation patterns: heads, tails and pivots. In simulations and experiments, we show that these basic building blocks can combine into at least four different bound states. By representing their interactions as Feynman diagrams, the creation and annihilation of rotor pairs are shown to be sequences of dynamical creation, annihilation and recombination of the identified quasiparticles. Our results provide a new theoretical foundation for more detailed theory and analysis of, and mechanistic insight into, topological transitions in excitation patterns, to be applied within and beyond the context of cardiac electrophysiology.
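For readers who want to experiment, pivots (phase singularities, i.e. rotor cores) are commonly located by the winding number of the activation phase around a small closed loop of grid points. This is a standard technique in the field, shown here only as a plausible detector, not as the paper's actual method.

```python
import math

def winding_number(loop_phases):
    """Topological charge of a closed loop of phase values:
    +1 or -1 indicates a pivot (phase singularity) inside the loop, 0 none."""
    total = 0.0
    n = len(loop_phases)
    for i in range(n):
        d = loop_phases[(i + 1) % n] - loop_phases[i]
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        total += d
    return round(total / (2 * math.pi))
```

Tracking where such charges appear, disappear and recombine over time is the kind of bookkeeping that the quasiparticle and Feynman-diagram formalism organizes.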

    Adaptive Re-Segmentation Strategies for Accurate Bright Field Cell Tracking

    Understanding complex interactions in cellular systems requires accurate tracking of individual cells observed in microscopic image sequences acquired from multi-day in vitro experiments. To be effective, methods must follow each cell through the whole experimental sequence to recognize significant phenotypic transitions, such as mitosis, chemotaxis, apoptosis, and cell/cell interactions, and to detect the effect of cell treatments. However, high-accuracy long-range cell tracking is difficult because the detection of cells in images is error-prone, and a single error in one frame can cause a tracked cell to be lost. Detection of cells is especially difficult in bright field microscopy images, where the contrast between the cells and the background is very low. This work introduces a new method that automatically identifies and then corrects tracking errors using a combination of combinatorial registration, flow constraints, and image segmentation repair.
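A minimal version of the flow-constraint idea: flag frames in which a cell's apparent displacement is implausibly large, so those frames become candidates for re-segmentation and track repair. The threshold and the centroid-track representation are illustrative assumptions, not the paper's actual parameters.

```python
def flag_flow_violations(track, max_step=20.0):
    """Return indices of frames whose centroid displacement from the previous
    frame exceeds a plausibility threshold (a simple flow constraint)."""
    flags = []
    for i in range(1, len(track)):
        (x0, y0), (x1, y1) = track[i - 1], track[i]
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_step:
            flags.append(i)
    return flags
```

In a full pipeline, each flagged frame would trigger local re-segmentation and combinatorial re-registration of detections before the track is re-linked across the gap.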