
    Illuminating spindle convex bodies and minimizing the volume of spherical sets of constant width

    A subset of the d-dimensional Euclidean space having nonempty interior is called a spindle convex body if it is the intersection of (finitely or infinitely many) congruent d-dimensional closed balls. A spindle convex body is called "fat" if it contains the centers of its generating balls. The core of this paper is an extension of Schramm's theorem, and its proof, on illuminating convex bodies of constant width to the family of "fat" spindle convex bodies. Comment: 17 pages

    Opaque Service Virtualisation: A Practical Tool for Emulating Endpoint Systems

    Large enterprise software systems have many complex interactions with other services in their environment. Developing and testing under production-like conditions is therefore a very challenging task. Current approaches include emulation of dependent services using either explicit modelling or record-and-replay approaches. Models require deep knowledge of the target services, while record-and-replay is limited in accuracy; both face development and scaling issues. We present a new technique that improves the accuracy of record-and-replay approaches without requiring prior knowledge of the service protocols. The approach uses Multiple Sequence Alignment to derive message prototypes from recorded system interactions and a scheme to match incoming request messages against prototypes to generate response messages. We use a modified Needleman-Wunsch algorithm for distance calculation during message matching. Our approach has shown greater than 99% accuracy for the four enterprise system messaging protocols evaluated. The approach has been successfully integrated into the CA Service Virtualization commercial product to complement its existing techniques. Comment: In Proceedings of the 38th International Conference on Software Engineering Companion (pp. 202-211). arXiv admin note: text overlap with arXiv:1510.0142
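
    The paper's modified Needleman-Wunsch variant is not spelled out in the abstract, but the role it plays is easy to illustrate. Below is a minimal Python sketch of a Needleman-Wunsch-style global alignment cost used as a message distance; the unit gap and mismatch penalties and the function names are assumptions for exposition, not the published scheme.

        # Sketch: Needleman-Wunsch-style global alignment cost between two
        # byte strings, used to match an incoming request against derived
        # message prototypes. Penalties (match=0, mismatch=1, gap=1) are
        # illustrative assumptions, not the paper's parameters.
        def nw_distance(a: bytes, b: bytes, gap: int = 1, mismatch: int = 1) -> int:
            """Minimum alignment cost between a and b (row-rolled DP table)."""
            m, n = len(a), len(b)
            prev = list(range(n + 1))  # cost of aligning a[:0] against b[:j]
            for i in range(1, m + 1):
                curr = [i] + [0] * n
                for j in range(1, n + 1):
                    sub = prev[j - 1] + (0 if a[i - 1] == b[j - 1] else mismatch)
                    curr[j] = min(sub, prev[j] + gap, curr[j - 1] + gap)
                prev = curr
            return prev[n]

        def best_prototype(request: bytes, prototypes: list[bytes]) -> bytes:
            """Pick the stored prototype with the smallest alignment distance."""
            return min(prototypes, key=lambda p: nw_distance(request, p))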

    Noise-robust method for image segmentation

    Segmentation of noisy images is one of the most challenging problems in image analysis, and any improvement of segmentation methods can highly influence the performance of many image processing applications. In automated image segmentation, fuzzy c-means (FCM) clustering has been widely used because of its ability to model uncertainty within the data, its applicability to multi-modal data, and its fairly robust behaviour. However, the standard FCM algorithm does not consider any information about the spatial image context and is highly sensitive to noise and other imaging artefacts. To address these problems, we developed a new FCM-based approach to noise-robust fuzzy clustering, which we present in this paper. In this new iterative algorithm we incorporate both spatial and feature-space information into the similarity measure and the membership function, where the spatial information depends on the relative location and features of the neighbouring pixels. The performance of the proposed algorithm is tested on synthetic images with different noise levels and on real images. Quantitative and qualitative experimental segmentation results show that our method efficiently preserves the homogeneity of the regions and is more robust to noise than other FCM-based methods.
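
    The exact similarity measure and membership update are defined in the paper itself; the numpy sketch below only shows the general shape of such an algorithm: standard fuzzy c-means plus a 4-neighbourhood membership-averaging term standing in for the authors' spatial weighting. The blending parameter alpha and the smoothing rule are assumptions for illustration.

        import numpy as np

        def fcm_spatial(img, c=2, m=2.0, alpha=0.5, iters=50):
            """Fuzzy c-means on a grayscale image with a simple spatial term.

            alpha blends each pixel's membership with the mean membership of
            its 4-neighbourhood -- an illustrative stand-in for the paper's
            spatial similarity measure, not the published update rule.
            """
            h, w = img.shape
            x = img.astype(float).ravel()
            n = x.size
            rng = np.random.default_rng(0)
            u = rng.random((c, n))
            u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
            for _ in range(iters):
                um = u ** m
                centers = um @ x / um.sum(axis=1)   # weighted cluster centroids
                d = np.abs(x[None, :] - centers[:, None]) + 1e-9
                u = d ** (-2.0 / (m - 1))           # standard FCM membership update
                u /= u.sum(axis=0)
                ug = u.reshape(c, h, w)             # smooth over 4-neighbourhoods
                nb = np.zeros_like(ug)
                nb[:, 1:, :] += ug[:, :-1, :]
                nb[:, :-1, :] += ug[:, 1:, :]
                nb[:, :, 1:] += ug[:, :, :-1]
                nb[:, :, :-1] += ug[:, :, 1:]
                u = ((1 - alpha) * ug + alpha * nb / 4).reshape(c, n)
                u /= u.sum(axis=0)                  # (border pixels see fewer
            return u.argmax(axis=0).reshape(h, w)   #  neighbours; ignored here)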

    Pixel and Voxel Representations of Graphs

    We study contact representations for graphs, which we call pixel representations in 2D and voxel representations in 3D. Our representations are based on the unit square grid whose cells we call pixels in 2D and voxels in 3D. Two pixels are adjacent if they share an edge; two voxels, if they share a face. We call a connected set of pixels or voxels a blob. Given a graph, we represent its vertices by disjoint blobs such that two blobs contain adjacent pixels or voxels if and only if the corresponding vertices are adjacent. We are interested in the size of a representation, which is the number of pixels or voxels it consists of. We first show that finding minimum-size representations is NP-complete. Then, we bound representation sizes needed for certain graph classes. In 2D, we show that, for $k$-outerplanar graphs with $n$ vertices, $\Theta(kn)$ pixels are always sufficient and sometimes necessary. In particular, outerplanar graphs can be represented with a linear number of pixels, whereas general planar graphs sometimes need a quadratic number. In 3D, $\Theta(n^2)$ voxels are always sufficient and sometimes necessary for any $n$-vertex graph. We improve this bound to $\Theta(n \cdot \tau)$ for graphs of treewidth $\tau$ and to $O((g+1)^2 n \log^2 n)$ for graphs of genus $g$. In particular, planar graphs admit representations with $O(n \log^2 n)$ voxels.
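
    As a concrete reading of these definitions (not an algorithm from the paper), a small Python check that a candidate assignment of blobs forms a valid pixel representation of a given graph might look as follows; all names are hypothetical.

        from itertools import combinations

        def is_pixel_representation(graph_edges, blobs):
            """Check the definition above: vertices map to disjoint connected
            pixel blobs, and two blobs contain adjacent pixels iff the
            corresponding vertices are adjacent.
            blobs: dict mapping vertex -> set of (x, y) grid cells."""
            def neighbours(p):
                x, y = p
                return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

            def connected(cells):
                seen, stack = set(), [next(iter(cells))]
                while stack:
                    p = stack.pop()
                    if p in seen:
                        continue
                    seen.add(p)
                    stack.extend(neighbours(p) & cells - seen)
                return seen == cells

            all_cells = [cell for b in blobs.values() for cell in b]
            if len(all_cells) != len(set(all_cells)):
                return False                  # blobs must be disjoint
            if not all(b and connected(b) for b in blobs.values()):
                return False                  # each blob must be connected
            for u, v in combinations(blobs, 2):
                touch = any(q in blobs[v] for p in blobs[u] for q in neighbours(p))
                if touch != ((u, v) in graph_edges or (v, u) in graph_edges):
                    return False              # blobs touch iff vertices adjacent
            return True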

    Considerations about multistep community detection

    The problem of community detection in networks has attracted considerable attention, owing to its important applications in both the natural and social sciences. A number of algorithms have been developed to solve this problem, addressing either speed optimization or the quality of the partitions calculated. In this paper we propose a multi-step procedure bridging the fastest but less accurate algorithms (coarse clustering) with the slower, more effective ones (refinement). By heuristically ranking the nodes and classifying a fraction of them as `critical', the refinement step can be restricted to this subset of the network, thus saving computational time. Preliminary numerical results are discussed, showing improvement of the final partition. Comment: 12 pages
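
    The abstract does not fix the heuristic ranking, so the sketch below substitutes one plausible criterion -- the fraction of a node's neighbours placed in other communities -- on top of networkx's fast label propagation, with a greedy local move as the restricted refinement. All three choices are illustrative assumptions, not the paper's procedure.

        import networkx as nx

        def multistep_communities(G, critical_fraction=0.1):
            """Coarse-then-refine community detection sketch."""
            # Coarse step: fast but less accurate label propagation.
            coarse = nx.algorithms.community.label_propagation_communities(G)
            label = {v: i for i, com in enumerate(coarse) for v in com}

            # Heuristic ranking: fraction of neighbours in other communities.
            def boundary_score(v):
                nbrs = list(G[v])
                if not nbrs:
                    return 0.0
                return sum(label[u] != label[v] for u in nbrs) / len(nbrs)

            ranked = sorted(G, key=boundary_score, reverse=True)
            k = max(1, int(critical_fraction * G.number_of_nodes()))
            critical = ranked[:k]

            # Refinement restricted to the critical subset: move each node
            # to the community hosting most of its neighbours.
            for v in critical:
                votes = {}
                for u in G[v]:
                    votes[label[u]] = votes.get(label[u], 0) + 1
                if votes:
                    label[v] = max(votes, key=votes.get)
            return label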

    Covering convex bodies by cylinders and lattice points by flats

    In connection with an unsolved problem of Bang (1951), we give a lower bound for the sum of the base volumes of cylinders covering a d-dimensional convex body in terms of the relevant basic measures of the given convex body. As an application, we establish lower bounds on the number of k-dimensional flats (i.e., translates of k-dimensional linear subspaces) needed to cover all the integer points of a given convex body in d-dimensional Euclidean space for 0 < k < d.

    On Optimizing Locally Linear Nearest Neighbour Reconstructions Using Prototype Reduction Schemes

    This paper concerns the use of Prototype Reduction Schemes (PRS) to optimize the computations involved in typical k-Nearest Neighbor (k-NN) rules. These rules have been used successfully for decades in statistical Pattern Recognition (PR) applications, and have numerous applications because of their known error bounds. For a given data point of unknown identity, the k-NN rule combines the information from the a priori target classes (values) of the selected neighbors to, for example, predict the target class of the tested sample. Recently, an implementation of the k-NN named Locally Linear Reconstruction (LLR) [11] has been proposed. Its salient feature is that, by invoking a quadratic optimization process, it can systematically set model parameters such as the number of neighbors (specified by the parameter k) and the weights. However, the LLR takes more time than other conventional methods when applied to classification tasks. To overcome this problem, we propose a strategy of using a PRS to solve the optimization problem efficiently. We demonstrate, first of all, that by completely discarding the points not included by the PRS, we obtain a reduced set of sample points with which the quadratic optimization problem can be solved far more expediently. The values of the corresponding indices are comparable to those obtained with the original training set (i.e., the one which considers all the data points), even though the computations required to obtain the prototypes and to perform the corresponding classification are noticeably fewer. The proposed method has been tested on artificial and real-life data sets; the results obtained are very promising and indicate potential in PR applications.
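
    A common way to realize the weight-setting step is the constrained least-squares formulation familiar from locally linear embedding; the numpy sketch below applies it to a PRS-reduced prototype set. The formulation and the regularization constant are assumptions standing in for the exact quadratic programme of [11].

        import numpy as np

        def llr_predict(x, prototypes, targets, k=5, reg=1e-3):
            """LLR-style prediction from a PRS-reduced prototype set.

            prototypes: (N, d) array retained by a Prototype Reduction Scheme.
            targets:    (N,) class labels of the prototypes.
            """
            d2 = np.sum((prototypes - x) ** 2, axis=1)
            idx = np.argsort(d2)[:k]              # k nearest prototypes
            Z = prototypes[idx] - x               # centre on the query point
            C = Z @ Z.T + reg * np.eye(k)         # regularized local Gram matrix
            w = np.linalg.solve(C, np.ones(k))    # least-squares reconstruction
            w /= w.sum()                          # weights constrained to sum to 1
            votes = {}                            # weighted vote over labels
            for weight, label in zip(w, targets[idx]):
                votes[label] = votes.get(label, 0.0) + weight
            return max(votes, key=votes.get)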

    Sybil tolerance and probabilistic databases to compute web services trust

    This paper discusses how Sybil attacks can undermine trust management systems and how to respond to these attacks using advanced techniques such as credibility and probabilistic databases. In such attacks, end-users purposely assume different identities and can hence provide inconsistent ratings of the same Web services. Many existing approaches rely on arbitrary choices to filter out Sybil users and reduce their attack capabilities; however, this turns out to be inefficient. Our approach relies on non-Sybil credible users who provide consistent ratings over Web services and hence can be trusted. To establish these ratings and debunk Sybil users, techniques such as fuzzy clustering, graph search, and probabilistic databases are adopted. A series of experiments demonstrates the robustness of our trust approach in the presence of Sybil attacks.
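
    A toy version of the credibility idea can be sketched without the full machinery: score each user by how consistently they rate each service relative to a robust consensus (here, the median), then weight the aggregated trust score by that credibility. The consistency measure below is an assumption for illustration; the paper itself relies on fuzzy clustering, graph search, and probabilistic databases.

        from statistics import median

        def trust_scores(ratings):
            """ratings: list of (user, service, score) triples.

            Credibility here is inverse mean absolute deviation from each
            service's median rating -- an illustrative consistency measure,
            not the paper's fuzzy-clustering/probabilistic-database scheme.
            """
            by_service, by_user = {}, {}
            for user, service, score in ratings:
                by_service.setdefault(service, []).append(score)
                by_user.setdefault(user, []).append((service, score))

            consensus = {s: median(v) for s, v in by_service.items()}
            credibility = {
                u: 1.0 / (1.0 + sum(abs(sc - consensus[s]) for s, sc in obs) / len(obs))
                for u, obs in by_user.items()
            }

            # Credibility-weighted average rating per service.
            trust = {}
            for user, service, score in ratings:
                w = credibility[user]
                num, den = trust.get(service, (0.0, 0.0))
                trust[service] = (num + w * score, den + w)
            return {s: num / den for s, (num, den) in trust.items()}, credibility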

    Basic Understanding of Condensed Phases of Matter via Packing Models

    Packing problems have been a source of fascination for millennia, and their study has produced a rich literature that spans numerous disciplines. Investigations of hard-particle packing models have provided basic insights into the structure and bulk properties of condensed phases of matter, including low-temperature states (e.g., molecular and colloidal liquids, crystals and glasses), multiphase heterogeneous media, granular media, and biological systems. The densest packings are of great interest in pure mathematics, including discrete geometry and number theory. This perspective reviews pertinent theoretical and computational literature concerning equilibrium, metastable, and nonequilibrium packings of hard particles in various Euclidean space dimensions. In the case of jammed packings, emphasis is placed on the "geometric-structure" approach, which provides a powerful and unified means to quantitatively characterize individual packings via jamming categories and "order" maps. It incorporates extremal jammed states, including the densest packings, maximally random jammed states, and lowest-density jammed structures. Packings of identical spheres, spheres with a size distribution, and nonspherical particles are also surveyed. We close this review by identifying challenges and open questions for future research. Comment: 33 pages, 20 figures, Invited "Perspective" submitted to the Journal of Chemical Physics. arXiv admin note: text overlap with arXiv:1008.298
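
    As an elementary instance of the nonequilibrium hard-particle protocols surveyed here, random sequential addition (RSA) of hard disks fits in a few lines of numpy; in 2D, RSA saturates near a packing fraction of 0.547. This is a generic textbook model, not code from the review.

        import numpy as np

        def rsa_disks(radius=0.02, attempts=50_000, seed=0):
            """Random sequential addition of hard disks in the unit square.

            Disks are placed at uniform random positions and kept only if
            they overlap no previously accepted disk. Periodic boundaries
            are ignored for simplicity.
            """
            rng = np.random.default_rng(seed)
            centers = np.empty((0, 2))
            for _ in range(attempts):
                p = rng.uniform(radius, 1 - radius, size=2)  # disk fully inside box
                if centers.size == 0 or np.all(
                    np.sum((centers - p) ** 2, axis=1) >= (2 * radius) ** 2
                ):
                    centers = np.vstack([centers, p])
            phi = len(centers) * np.pi * radius ** 2         # covered area fraction
            return centers, phi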

    On the Densest Packing of Polycylinders in Any Dimension
