
    The use of facility dogs to bridge the justice gap for survivors of sexual offending

    The current study investigated the support that a facility dog can provide to survivors of sexual crimes undergoing video-recorded police interviews. In total, 13 survivors of sexual offences who were undergoing a video-recorded interview were provided with a facility dog for the interview process. For each case, data were collected via interviews, observations and surveys. Using a multiple case study approach, qualitative data were analysed to identify patterns, with observational and survey data used to further support these outcomes. Four main themes emerged from the data: (1) a change in focus for the survivor, (2) a difference in the survivors’ engagement, (3) the dog as a comforter keeping the survivor calm and (4) a positive environment. Overall, the findings suggest that the facility dog provided a much-needed and beneficial service to survivors, helping them feel calmer and more comfortable. The dog also provided survivors with a more positive environment, allowing them to focus on the interview and communicate more openly about their experiences. The current study therefore presents very positive findings on improving survivors’ perspectives of justice within the framework of kaleidoscopic justice, bridging their perceived justice gap.

    Histopathologic alterations associated with the transplanted homologous dog liver

    Homotransplanted livers in dogs developed mononuclear, lymphocytic and plasmacytic infiltration and hepatic cell degeneration roughly paralleling survival time. Extensive histologic alterations of host reticuloendothelial structures occurred. Proliferation and infiltration of mononuclear cells, principally plasmacytes, were noted in lung, kidney, perirenal supportive tissue, bone marrow, and lymph nodes. Lymph nodes, in addition, were characterized by cortical and follicular depletion. These changes were considered to represent extensive host reticuloendothelial mobilization coincident to liver homotransplant rejection. The relation between these alterations and those found in other hypersensitivity states is discussed. © 1962

    Statistical significance of communities in networks

    Nodes in real-world networks are usually organized in local modules. These groups, called communities, are intuitively defined as sub-graphs with a higher density of internal connections than of external links. In this work, we introduce a new measure aimed at quantifying the statistical significance of single communities. Extreme and Order Statistics are used to predict the statistics associated with individual clusters in random graphs. These distributions allow us to define the significance of a community as the probability that a generic clustering algorithm finds such a group in a random graph. The method is successfully applied to real-world networks to evaluate the significance of their communities.
    Comment: 9 pages, 8 figures, 2 tables. The software to calculate the C-score can be found at http://filrad.homelinux.org/cscor
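    A minimal sketch of the underlying idea (not the authors' C-score itself; the null model and all names here are illustrative assumptions): under a simple binomial null, one can compute the probability that a node with total degree k_tot has at least k_in of its links inside a candidate community of size n_c in a graph of N nodes. Small probabilities indicate memberships unlikely to arise in a random graph.

```python
from math import comb

def internal_degree_pvalue(k_in, k_tot, n_c, N):
    """P(node has >= k_in of its k_tot links inside a community of size n_c)
    under a null model where each link independently lands inside the
    community with probability q = (n_c - 1) / (N - 1)."""
    q = (n_c - 1) / (N - 1)
    return sum(comb(k_tot, k) * q ** k * (1 - q) ** (k_tot - k)
               for k in range(k_in, k_tot + 1))
```

    For example, a node with 8 of its 10 links inside a 5-node community of a 100-node graph gets a far smaller p-value than one with only 2 internal links.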

    Replicators in Fine-grained Environment: Adaptation and Polymorphism

    Selection in a time-periodic environment is modeled via the two-player replicator dynamics. For sufficiently fast environmental changes, this reduces to a multi-player replicator dynamics in a constant environment. The two-player terms correspond to the time-averaged payoffs, while the three- and four-player terms arise from the adaptation of the morphs to their varying environment. Such multi-player (adaptive) terms can induce a stable polymorphism. The establishment of the polymorphism in partnership games [genetic selection] is accompanied by a decreasing mean fitness of the population.
    Comment: 4 pages, 2 figures
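    The two-player replicator dynamics mentioned above can be sketched numerically. The payoff matrix below is an arbitrary illustrative choice with a stable interior (polymorphic) fixed point, not one taken from the paper:

```python
def replicator_step(x, A, dt=0.1):
    """One Euler step of the replicator equation x_i' = x_i[(Ax)_i - x.Ax]."""
    f = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    phi = sum(x[i] * f[i] for i in range(len(x)))   # mean population fitness
    x = [x[i] + dt * x[i] * (f[i] - phi) for i in range(len(x))]
    s = sum(x)
    return [xi / s for xi in x]                     # renormalise against drift

# Illustrative payoff matrix (an assumption): both morphs earn equal
# payoff at x = (1/2, 1/2), and that interior point is stable.
A = [[0.0, 3.0],
     [1.0, 2.0]]
x = [0.9, 0.1]
for _ in range(2000):
    x = replicator_step(x, A)
```

    Starting from the monomorphic-leaning state (0.9, 0.1), the trajectory converges to the 50/50 polymorphism where both morphs have equal fitness.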

    Statistical Mechanics of Learning in the Presence of Outliers

    Using methods of statistical mechanics, we analyse the effect of outliers on the supervised learning of a classification problem. The learning strategy aims at selecting informative examples and discarding outliers. We compare two algorithms which perform the selection either in a soft or a hard way. When the fraction of outliers grows large, the estimation errors undergo a first-order phase transition.
    Comment: 24 pages, 7 figures (minor extensions added)
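    A toy illustration of the soft-versus-hard distinction (the logistic form and names are assumptions, not the paper's exact algorithms): both schemes down-weight high-loss examples that are likely outliers, one smoothly and one by a sharp cut.

```python
import math

def soft_weight(loss, threshold, beta):
    """Smoothly down-weight an example as its loss exceeds the threshold;
    beta controls how sharp the transition is."""
    return 1.0 / (1.0 + math.exp(beta * (loss - threshold)))

def hard_weight(loss, threshold):
    """Keep the example fully, or discard it outright."""
    return 1.0 if loss < threshold else 0.0
```

    As beta grows, the soft rule approaches the hard cut, mirroring the comparison drawn in the abstract.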

    MACOC: a medoid-based ACO clustering algorithm

    The application of ACO-based algorithms in data mining has grown over the last few years, and several supervised and unsupervised learning algorithms have been developed using this bio-inspired approach. Most recent work on unsupervised learning has focused on clustering, showing the great potential of ACO-based techniques. This work presents an ACO-based clustering algorithm inspired by the ACO Clustering (ACOC) algorithm. The proposed approach restructures ACOC from a centroid-based technique into a medoid-based technique, in which the properties of the search space are not necessarily known; instead, it relies only on the distances amongst data points. The new algorithm, called MACOC, has been compared against well-known algorithms (k-means and Partitioning Around Medoids) and against ACOC. The experiments measure the accuracy of the algorithm on both synthetic datasets and real-world datasets extracted from the UCI Machine Learning Repository.
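    The full ant-colony construction is beyond a short sketch, but the medoid-based evaluation that distinguishes MACOC from centroid methods can be illustrated: given only a pairwise distance matrix (no coordinates), each point is assigned to its nearest medoid. Function and variable names here are illustrative, not from the paper.

```python
def assign_to_medoids(dist, medoids):
    """Assign each point to its nearest medoid using only pairwise
    distances; return cluster labels and the total clustering cost."""
    labels, cost = [], 0.0
    for row in dist:
        d = [row[m] for m in medoids]
        best = min(range(len(medoids)), key=lambda k: d[k])
        labels.append(best)
        cost += d[best]
    return labels, cost

# Four points on a line forming two tight pairs; dist[i][j] = |p_i - p_j|.
pts = [0.0, 1.0, 10.0, 11.0]
dist = [[abs(a - b) for b in pts] for a in pts]
labels, cost = assign_to_medoids(dist, [0, 2])
```

    Because only distances are used, the same step works for any dissimilarity measure, which is exactly why a medoid formulation suits search spaces whose properties are unknown.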

    Analysis of Fourier transform valuation formulas and applications

    The aim of this article is to provide a systematic analysis of the conditions under which Fourier transform valuation formulas are valid in a general framework, i.e. when the option has an arbitrary payoff function and depends on the path of the asset price process. An interplay between the conditions on the payoff function and on the process arises naturally. We also extend these results to the multi-dimensional case and discuss the calculation of Greeks by Fourier transform methods. As an application, we price options on the minimum of two assets in Lévy and stochastic volatility models.
    Comment: 26 pages, 3 figures, to appear in Appl. Math. Finance
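    One standard form such valuation formulas take (written under assumed regularity conditions and sign conventions, not quoted from the article): with \(\hat f(z) = \int_{\mathbb{R}} e^{izx} f(x)\,\mathrm{d}x\) the (dampened) Fourier transform of the payoff, \(\varphi_T(z) = E[e^{iz X_T}]\) the characteristic function of the log-asset price, and \(R\) a damping parameter making both factors integrable, the option value is

```latex
V_0 \;=\; \frac{e^{-rT}}{2\pi} \int_{\mathbb{R}}
      \hat f(u + iR)\,\varphi_T(-u - iR)\,\mathrm{d}u .
```

    The interplay mentioned in the abstract shows up precisely in the admissible range of \(R\): the payoff fixes where \(\hat f\) exists, while the model fixes where \(\varphi_T\) does.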

    A Discriminative Model of Stochastic Edit Distance in the form of a Conditional Transducer

    Pages 240-252. Many real-world applications such as spell-checking or DNA analysis use the Levenshtein edit distance to compute similarities between strings. In practice, the costs of the primitive edit operations (insertion, deletion and substitution of symbols) are generally hand-tuned. In this paper, we propose an algorithm to learn these costs. The underlying model is a probabilistic transducer, computed using grammatical inference techniques, that allows us to learn both the structure and the probabilities of the model. Beyond the fact that the learned transducers are neither deterministic nor stochastic in the standard terminology, they are conditional, and thus independent of the distributions of the input strings. Finally, we show through experiments that our method allows us to design cost functions that depend on the string context in which the edit operations are used. In other words, we obtain a kind of context-sensitive edit distance.
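    The abstract's starting point, the Levenshtein distance with tunable operation costs, can be sketched as a dynamic program. Here the costs are constants; the paper's learned transducer would instead make them depend on the surrounding string context:

```python
def weighted_edit_distance(s, t, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
    """Levenshtein distance between s and t with per-operation costs."""
    m, n = len(s), len(t)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):                 # delete all of s
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):                 # insert all of t
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if s[i - 1] == t[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j - 1] + sub,      # substitute / match
                          D[i - 1][j] + del_cost,     # delete from s
                          D[i][j - 1] + ins_cost)     # insert into s
    return D[m][n]
```

    With unit costs this reduces to the classic distance (e.g. "kitten" to "sitting" costs 3); lowering the insertion cost, say, makes insertions preferred, which is the kind of tuning the paper proposes to learn from data.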

    A population-based approach to background discrimination in particle physics

    Background properties in experimental particle physics are typically estimated using control samples corresponding to large numbers of events. This can provide precise knowledge of average background distributions, but typically does not account for the effect of fluctuations in a data set of interest. A novel approach based on mixture model decomposition is presented as a way to estimate the effect of fluctuations on the shapes of probability distributions in a given data set, with a view to improving on the knowledge of background distributions obtained from control samples. Events are treated as heterogeneous populations comprising particles originating from different processes, and individual particles are mapped to a process of interest on a probabilistic basis. The proposed approach makes it possible to extract from the data information about the effect of fluctuations that would otherwise be lost with traditional methods based on high-statistics control samples. A feasibility study on Monte Carlo data is presented, together with a comparison with existing techniques. Finally, the prospects for developing tools for intensive offline analysis of individual events at the Large Hadron Collider are discussed.
    Comment: Updated according to the version published in J. Phys.: Conf. Ser. Minor changes have been made to the text with respect to the published article with a view to improving readability.
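    The probabilistic mapping of individual particles to processes can be illustrated with a two-component Gaussian mixture (a deliberately simplified stand-in for the paper's decomposition; all parameters below are arbitrary):

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_signal(x, w_sig, mu_sig, sd_sig, mu_bkg, sd_bkg):
    """Per-particle probability of originating from the 'signal' component
    of a two-component mixture, given an observed value x and mixture
    weight w_sig (Bayes' rule on the component densities)."""
    ps = w_sig * gauss(x, mu_sig, sd_sig)
    pb = (1.0 - w_sig) * gauss(x, mu_bkg, sd_bkg)
    return ps / (ps + pb)
```

    Particles near the signal component's mean get posteriors near 1, those near the background mean near 0; summing the posteriors over a data set estimates the signal yield actually fluctuating in that set, rather than the control-sample average.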