
    Reconciling Distance Functions and Level Sets

    This paper is concerned with the simulation of the Partial Differential Equation (PDE) driven evolution of a closed surface by means of an implicit representation. In most applications, the natural choice for the implicit representation is the signed distance function to the closed surface. Osher and Sethian propose to evolve the distance function with a Hamilton-Jacobi equation. Unfortunately, the solution to this equation is not a distance function. As a consequence, the practical application of the level set method is plagued with questions such as "When do we have to reinitialize the distance function?" and "How do we reinitialize it?", which reveal a disagreement between the theory and its implementation. This paper proposes an alternative to the use of Hamilton-Jacobi equations which eliminates this contradiction: in our method the implicit representation always remains a distance function by construction, and the implementation no longer differs from the theory. This is achieved through the introduction of a new equation. Besides its theoretical advantages, the proposed method also has several practical advantages, which we demonstrate in three applications: (i) the segmentation of the human cortex surfaces from MRI images using two coupled surfaces [26], (ii) the construction of a hierarchy of Euclidean skeletons of a 3D surface, and (iii) the reconstruction of the surface of 3D objects through stereo [12].
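    For reference, the standard equations behind the abstract's discussion (the paper's own replacement equation is stated in the paper itself): Osher and Sethian evolve the implicit function $\phi$ under a speed $F$ via a Hamilton-Jacobi equation; a signed distance function is characterized by a unit gradient; and the classical fix when the evolved $\phi$ stops being a distance function is a reinitialization PDE:

        \frac{\partial \phi}{\partial t} + F\,|\nabla \phi| = 0                                  (level set evolution)
        |\nabla \phi| = 1                                                                        (signed distance property)
        \frac{\partial \phi}{\partial \tau} = \operatorname{sign}(\phi_0)\,(1 - |\nabla \phi|)   (reinitialization)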

    Reconciling Graphs and Sets of Sets

    We explore a generalization of set reconciliation, where the goal is to reconcile sets of sets. Alice and Bob each have a parent set consisting of $s$ child sets, each containing at most $h$ elements from a universe of size $u$. They want to reconcile their sets of sets in a scenario where the total number of differences between all of their child sets (under the minimum difference matching between their child sets) is $d$. We give several algorithms for this problem, and discuss applications to reconciliation problems on graphs, databases, and collections of documents. We specifically focus on graph reconciliation, providing protocols based on set of sets reconciliation for random graphs from $G(n,p)$ and for forests of rooted trees.
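    As a concrete illustration of the difference measure $d$ defined above, here is a minimal brute-force Python sketch. The function name is ours, and the paper's actual protocols use efficient reconciliation primitives rather than enumerating matchings:

        import itertools

        def min_matching_difference(parent_a, parent_b):
            """The quantity d: total symmetric-difference size under the best
            one-to-one matching of child sets. Brute force over all matchings,
            so only suitable for tiny examples."""
            assert len(parent_a) == len(parent_b)
            return min(
                sum(len(a ^ b) for a, b in zip(parent_a, perm))
                for perm in itertools.permutations(parent_b)
            )

        # Two parent sets of s = 2 child sets that differ in d = 2 elements.
        alice = [{1, 2, 3}, {4, 5}]
        bob = [{4, 5, 6}, {1, 2}]
        print(min_matching_difference(alice, bob))  # -> 2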

    Supporting the reconciliation of models of object behaviour

    This paper presents Reconciliation+, a method which identifies overlaps between models of software system behaviour expressed as UML object interaction diagrams (i.e., sequence and/or collaboration diagrams), checks whether the overlapping elements of these models satisfy specific consistency rules and, in cases where they violate these rules, guides software designers in handling the detected inconsistencies. The method detects overlaps between object interaction diagrams by using a probabilistic message matching algorithm that has been developed for this purpose. The guidance to software designers on when to check for inconsistencies and how to deal with them is delivered by enacting a built-in process model that specifies the consistency rules that can be checked against overlapping models and different ways of handling violations of these rules. Reconciliation+ is supported by a toolkit and has been evaluated in a case study, which produced positive results that are discussed in the paper.
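    The abstract does not give the matching algorithm itself, but the general idea of similarity-based message matching can be sketched: score candidate message pairs across two diagrams and keep pairs above a threshold. The greedy strategy, threshold, and names below are hypothetical illustrations, not Reconciliation+'s actual probabilistic algorithm:

        from difflib import SequenceMatcher

        def match_messages(diagram_a, diagram_b, threshold=0.7):
            """Greedily pair up messages (plain strings here) whose textual
            similarity exceeds a threshold. Hypothetical stand-in for the
            paper's probabilistic matching."""
            overlaps, unmatched = [], list(diagram_b)
            for msg in diagram_a:
                scored = [(SequenceMatcher(None, msg, m).ratio(), m) for m in unmatched]
                if scored:
                    score, best = max(scored)
                    if score >= threshold:
                        overlaps.append((msg, best))
                        unmatched.remove(best)
            return overlaps

        print(match_messages(["openAccount(cust)", "deposit(amt)"],
                             ["openAccount(customer)", "withdraw(amt)"]))
        # -> [('openAccount(cust)', 'openAccount(customer)')]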

    Double longitudinal-spin asymmetries in $J/\psi$ production at RHIC

    The double longitudinal-spin asymmetry, $A_{LL}$, of $J/\psi$ production in polarized proton-proton collisions is presented in this paper at QCD next-to-leading order. The obtained values of $A_{LL}$ are in general consistent with the PHENIX measurements. Various sets of the long-distance matrix elements (LDMEs) are employed in our calculation to study the possible theoretical uncertainties. It is found that, for $p_t < 5$ GeV, all these LDMEs lead to almost the same results, which are within the uncertainties of the experimental data.
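    For reference, $A_{LL}$ is conventionally defined from the helicity-dependent cross sections of the two colliding protons:

        A_{LL} = \frac{\sigma_{++} - \sigma_{+-}}{\sigma_{++} + \sigma_{+-}}

    where $\sigma_{++}$ ($\sigma_{+-}$) denotes the cross section for collisions with equal (opposite) proton helicities.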

    Reconciling Synthesis and Decomposition: A Composite Approach to Capability Identification

    Stakeholders' expectations and technology constantly evolve during the lengthy development cycles of a large-scale computer-based system. Consequently, the traditional approach of baselining requirements results in an unsatisfactory system because it is ill-equipped to accommodate such change. In contrast, systems constructed on the basis of Capabilities are more change-tolerant; Capabilities are functional abstractions that are neither as amorphous as user needs nor as rigid as system requirements. Rather, Capabilities are aggregates that capture desired functionality from the users' needs, and are designed to exhibit desirable software engineering characteristics of high cohesion, low coupling, and optimum abstraction levels. To formulate these functional abstractions we develop and investigate two algorithms for Capability identification: Synthesis and Decomposition. The synthesis algorithm aggregates detailed rudimentary elements of the system to form Capabilities. In contrast, the decomposition algorithm determines Capabilities by recursively partitioning the overall mission of the system into more detailed entities. Empirical analysis on a small computer-based library system reveals that neither approach is sufficient by itself. However, a composite algorithm based on a complementary approach reconciling the two polar perspectives results in a more feasible set of Capabilities. In particular, the composite algorithm formulates Capabilities using the cohesion and coupling measures as defined by the decomposition algorithm and the abstraction level as determined by the synthesis algorithm.
    Comment: This paper appears in the 14th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS); 10 pages, 9 figures
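    A toy Python sketch of the cohesion/coupling trade-off that drives the composite algorithm. The element names, dependency links, and density-based measures below are hypothetical stand-ins; the paper's decomposition algorithm defines its own metrics:

        def cohesion(group, links):
            """Fraction of element pairs inside a capability that are linked."""
            pairs = [frozenset((a, b)) for a in group for b in group if a < b]
            return sum(p in links for p in pairs) / len(pairs) if pairs else 1.0

        def coupling(group_a, group_b, links):
            """Fraction of element pairs across two capabilities that are linked."""
            pairs = [frozenset((a, b)) for a in group_a for b in group_b]
            return sum(p in links for p in pairs) / len(pairs) if pairs else 0.0

        # Hypothetical functional elements and dependency links.
        links = {frozenset(p) for p in [("search", "browse"), ("checkout", "pay")]}
        print(cohesion({"search", "browse"}, links))                       # -> 1.0 (high cohesion)
        print(coupling({"search", "browse"}, {"checkout", "pay"}, links))  # -> 0.0 (low coupling)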