
    Fast optimization of Multithreshold Entropy Linear Classifier

    Multithreshold Entropy Linear Classifier (MELC) is a density-based model which searches for a linear projection maximizing the Cauchy-Schwarz Divergence of the dataset's kernel density estimation. Despite its good empirical results, one of its drawbacks is optimization speed. In this paper we analyze how it can be sped up by solving an approximate problem. We analyze two methods, both similar to approximate solutions for Kernel Density Estimation querying, and provide adaptive schemes for selecting the crucial parameters based on a user-specified acceptable error. Furthermore, we show how one can exploit the well-known conjugate gradient and L-BFGS optimizers despite the fact that the original optimization problem should be solved on the sphere. All of the above methods and modifications are tested on 10 real-life datasets from the UCI repository to confirm their practical usability. Comment: Presented at Theoretical Foundations of Machine Learning 2015 (http://tfml.gmum.net); final version published in Schedae Informaticae Journal.
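    The sphere constraint mentioned in the abstract can be handled with a standard trick: take an unconstrained gradient step and re-normalize, so generic optimizers such as CG or L-BFGS become applicable. The following is a minimal illustrative sketch, not the authors' code; it uses plain gradient ascent on a toy objective f(w) = (v · w)^2 over the unit sphere, whose maximizer is w = ±v/‖v‖.

    ```python
    # Sketch (not the paper's implementation): sphere-constrained maximization
    # by projecting back onto the unit sphere after each unconstrained step.
    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(w):
        n = math.sqrt(dot(w, w))
        return [x / n for x in w]

    def sphere_ascent(v, w0, lr=0.1, steps=200):
        """Maximize (v . w)^2 subject to ||w|| = 1 via projected ascent."""
        w = normalize(w0)
        for _ in range(steps):
            g = [2 * dot(v, w) * vi for vi in v]  # gradient of (v . w)^2
            # unconstrained step, then project back onto the sphere
            w = normalize([wi + lr * gi for wi, gi in zip(w, g)])
        return w

    v = [3.0, 4.0]
    w = sphere_ascent(v, [1.0, 0.0])
    # w converges to v/||v|| = [0.6, 0.8]
    ```

    The same projection idea lets quasi-Newton methods like L-BFGS operate in the ambient space while the iterates stay (approximately) feasible.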

    Delinquency and Crime Prevention: Overview of Research Comparing Treatment Foster Care and Group Care

    Background: Evidence of treatment foster care (TFC) and group care's (GC) potential to prevent delinquency and crime has been developing. Objectives: We clarified the state of comparative knowledge with a historical overview. Then we explored the hypothesis that smaller, probably better-resourced group homes with smaller staff/resident ratios have greater impacts than larger homes, with a meta-analytic update. Methods: Research literatures were searched to 2015. Five systematic reviews were selected that included seven independent studies comparing delinquency or crime outcomes among youths ages 10–18. A similar search, augmented by author and bibliographic searches, identified six additional studies for an updated meta-analysis. Discrete effects were analyzed with sample-weighted preventive fractions (PF) and 95% confidence intervals (CI). Results: Compared with GC, TFC was estimated to prevent nearly half of delinquent or criminal acts over 1–3 years (PF = 0.56, 95% CI 0.50, 0.64). Two pooled study outcomes tentatively suggested that GC in homes with fewer than ten youths may prevent delinquency and crime better than TFC, p = 0.08. Study designs were non-equivalent or randomized trials that were typically too small to ensure controlled comparisons. Conclusions: These synthetic findings are best thought of as preliminary hypotheses. Confident knowledge will require testing them with large, perhaps multisite, controlled trials. Such a research agenda will undoubtedly be quite expensive, but it holds the promise of knowledge dividends that could prevent much suffering among youths, their families, and society.
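    For readers unfamiliar with the effect measure, the preventive fraction is conventionally defined as the proportion of events in the comparison condition that the treatment averts. A minimal sketch, assuming that standard definition and using hypothetical illustrative rates (not data from the reviewed studies):

    ```python
    def preventive_fraction(rate_control, rate_treatment):
        """Proportion of events in the comparison condition avoided by treatment."""
        return (rate_control - rate_treatment) / rate_control

    # Hypothetical rates chosen only to illustrate the arithmetic:
    pf = preventive_fraction(0.50, 0.22)
    # pf = 0.56
    ```

    A PF of 0.56 thus means roughly 56% of the comparison-condition events were estimated to be prevented.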

    Ask the GRU: Multi-Task Learning for Deep Text Recommendations

    In a variety of application domains the content to be recommended to users is associated with text. This includes research papers, movies with associated plot summaries, news articles, blog posts, etc. Recommendation approaches based on latent factor models can be extended naturally to leverage text by employing an explicit mapping from text to factors. This enables recommendations for new, unseen content, and may generalize better, since the factors for all items are produced by a compactly-parametrized model. Previous work has used topic models or averages of word embeddings for this mapping. In this paper we present a method leveraging deep recurrent neural networks, specifically gated recurrent units (GRUs) trained end-to-end on the collaborative filtering task, to encode the text sequence into a latent vector. For the task of scientific paper recommendation, this yields models with significantly higher accuracy. In cold-start scenarios, we beat the previous state of the art, all of which ignores word order. Performance is further improved by multi-task learning, where the text encoder network is trained for a combination of content recommendation and item metadata prediction. This regularizes the collaborative filtering model, ameliorating the problem of sparsity of the observed rating matrix. Comment: 8 pages.
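    The "explicit mapping from text to factors" can be illustrated with the embedding-average baseline the abstract mentions as prior work: item factors are produced from the item's text, so a brand-new item can still be scored against a user's latent factors. This is a hypothetical minimal sketch, not the paper's code; the paper's contribution is to replace the averaging component with a GRU encoder trained end-to-end.

    ```python
    # Sketch: cold-start scoring with item factors derived from text.
    # The text-to-factors mapping here is a simple embedding average
    # (the word-order-ignoring baseline); the paper swaps in a GRU.
    def text_to_factors(tokens, embeddings):
        dim = len(next(iter(embeddings.values())))
        known = [embeddings[t] for t in tokens if t in embeddings]
        acc = [0.0] * dim
        for vec in known:
            acc = [a + v for a, v in zip(acc, vec)]
        return [a / max(len(known), 1) for a in acc]

    def score(user_factors, tokens, embeddings):
        """Dot product of user latent factors with text-derived item factors."""
        item_factors = text_to_factors(tokens, embeddings)
        return sum(u * i for u, i in zip(user_factors, item_factors))

    # Toy 2-d embeddings, purely illustrative:
    emb = {"neural": [1.0, 0.0], "networks": [0.0, 1.0]}
    s = score([0.5, 0.5], ["neural", "networks"], emb)
    # s = 0.5 * 0.5 + 0.5 * 0.5 = 0.5
    ```

    Because the item side is a function of text alone, unseen items get factors without any interaction history, which is what makes the cold-start setting tractable.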