
    Substance use disorder and posttraumatic stress disorder symptomology on behavioral outcomes among juvenile justice youth

    BACKGROUND AND OBJECTIVES: Substance use behaviors have been identified as a risk factor that places juveniles at greater risk of engaging in delinquent behaviors and of continued contact with the juvenile justice system. Currently, there is a lack of research exploring comorbid factors associated with substance use, such as post-traumatic stress disorder (PTSD) symptoms, that could help identify the youth at greatest risk. The aim of the present study was to examine whether PTSD symptomology moderated the relationship between substance use disorder (SUD) symptoms and both externalizing behaviors and commission of a violent crime; we hypothesized that risk would be heightened among youth with elevated SUD and PTSD symptomology compared to those with elevated SUD symptoms but lower PTSD symptoms. METHOD: The study included 194 predominantly male (78.4%), non-White (74.2%) juvenile justice youth between the ages of 9 and 18 (M = 15.36). Youth provided responses to assess PTSD symptoms, SUD symptoms, and externalizing behaviors. Commission of a violent crime was based on parole officer report. RESULTS: Findings indicated that SUD symptomology was associated with greater externalizing behaviors at high levels of PTSD symptomology. At low levels of PTSD symptomology, SUD symptoms were inversely associated with externalizing behaviors. An interactive relationship was not observed for commission of violent crimes. CONCLUSIONS: Findings suggest that the association between SUD symptoms and externalizing behaviors among juvenile offenders may be best explained by the presence of PTSD symptomology. SCIENTIFIC SIGNIFICANCE: Addressing PTSD rather than SUD symptoms may be a better target for reducing risk for externalizing behaviors among this population of youth. (Am J Addict 2019;28:29-35)
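    The moderation effect reported here is, statistically, an interaction term in a regression of externalizing behaviors on SUD and PTSD scores. The following is a minimal sketch of such a test using simulated data and statsmodels; the variable names, scaling, and model specification are illustrative assumptions, not the authors' actual analysis.

    ```python
    # Illustrative sketch of a moderation (interaction) test: does PTSD
    # symptomology moderate the SUD -> externalizing-behavior association?
    # Variable names and the simulated data are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 194  # sample size reported in the abstract
    df = pd.DataFrame({
        "sud": rng.normal(size=n),   # SUD symptom score (standardized)
        "ptsd": rng.normal(size=n),  # PTSD symptom score (standardized)
    })
    # Simulate the reported pattern: SUD relates positively to externalizing
    # behavior when PTSD is high, negatively when PTSD is low.
    df["externalizing"] = 0.5 * df["sud"] * df["ptsd"] + rng.normal(size=n)

    # "sud * ptsd" expands to sud + ptsd + sud:ptsd; the sud:ptsd
    # coefficient is the moderation effect of interest.
    model = smf.ols("externalizing ~ sud * ptsd", data=df).fit()
    print(model.summary())
    ```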

    Type-Constrained Representation Learning in Knowledge Graphs

    Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships. In this work, we study how type-constraints can generally support statistical modeling with latent variable models. More precisely, we integrate prior knowledge in the form of type-constraints into various state-of-the-art latent variable approaches. Our experimental results show that prior knowledge on relation-types significantly improves these models, by up to 77% in link-prediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, type-constraints are neither always available nor always complete; e.g., they can become fuzzy when entities lack proper typing. We show that in these cases it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation-types based on observations made in the data.
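    One way to picture how type-constraints enter link prediction: when ranking candidate entities for a relation, only entities compatible with the relation's declared domain/range are considered, and the local closed-world assumption substitutes the entities actually observed with that relation when typing is missing. A minimal sketch under those assumptions (the data structures and examples are invented for illustration):

    ```python
    # Sketch: restricting link-prediction candidates via type-constraints,
    # falling back to a local closed-world assumption (LCWA) when the
    # relation has no (or fuzzy) typing. All data here is illustrative.

    # Declared range types per relation (schema knowledge); may be absent.
    relation_range = {"bornIn": {"City"}}          # "worksFor" is untyped
    entity_types = {"Alice": {"Person"}, "Berlin": {"City"},
                    "Acme": {"Company"}, "Paris": {"City"}}
    observed_triples = [("Alice", "worksFor", "Acme"),
                        ("Bob", "bornIn", "Paris")]

    def candidate_objects(relation, entities):
        """Entities allowed as objects of `relation` during ranking."""
        types = relation_range.get(relation)
        if types:  # hard type-constraint from the schema
            return {e for e in entities if entity_types.get(e, set()) & types}
        # LCWA: approximate the range by objects observed with this relation
        return {o for (_, r, o) in observed_triples if r == relation}

    entities = set(entity_types)
    print(candidate_objects("bornIn", entities))    # {'Berlin', 'Paris'}
    print(candidate_objects("worksFor", entities))  # {'Acme'} via LCWA
    ```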

    Reciprocal Recommender System for Learners in Massive Open Online Courses (MOOCs)

    Massive open online courses (MOOCs) are platforms where users with completely different backgrounds subscribe to the various courses on offer. MOOC forums and discussion boards offer learners a medium to communicate with each other and maximize their learning outcomes. However, learners are often hesitant to approach each other for various reasons (shyness, not knowing the right match, etc.). In this paper, we propose a reciprocal recommender system which matches learners who are mutually interested in, and likely to communicate with, each other based on their profile attributes such as age, location, gender, qualification, and interests. We test our algorithm on data sampled from the publicly available MITx-Harvardx dataset and demonstrate that both attribute importance and reciprocity play an important role in forming the final recommendation list of learners. Our approach provides promising results for such a system to be implemented within an actual MOOC. Comment: 10 pages, accepted as full paper @ ICWL 201
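    What distinguishes reciprocal recommendation from ordinary recommendation is that both directions of preference matter; a common way to combine them is a harmonic mean, which penalizes one-sided matches. A minimal sketch under that assumption (the attributes, weights, and similarity rules are illustrative, not the paper's exact model):

    ```python
    # Sketch of a reciprocal match score between two learners: score each
    # direction from weighted attribute compatibility, then combine with a
    # harmonic mean so one-sided interest is penalized. The attributes,
    # weights, and similarity rules below are illustrative assumptions.

    def directional_score(a, b, weights):
        """How well learner b matches learner a's profile (0..1)."""
        score = 0.0
        score += weights["location"] * (a["location"] == b["location"])
        score += weights["interests"] * (
            len(a["interests"] & b["interests"]) / max(len(a["interests"]), 1))
        score += weights["age"] * max(0.0, 1 - abs(a["age"] - b["age"]) / 20)
        return score / sum(weights.values())

    def reciprocal_score(a, b, weights):
        """Harmonic mean of the two directional scores."""
        s_ab = directional_score(a, b, weights)
        s_ba = directional_score(b, a, weights)
        if s_ab == 0 or s_ba == 0:
            return 0.0
        return 2 * s_ab * s_ba / (s_ab + s_ba)

    alice = {"location": "UK", "age": 24, "interests": {"python", "stats"}}
    bob = {"location": "UK", "age": 31, "interests": {"stats", "ml"}}
    print(reciprocal_score(alice, bob, {"location": 1, "interests": 2, "age": 1}))
    ```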

    Charge Symmetry Breaking and QCD

    Charge symmetry breaking (CSB) in the strong interaction occurs because of the difference between the masses of the up and down quarks. The use of effective field theories allows us to follow this influence of confined quarks in hadronic and nuclear systems. The progress in observing and understanding CSB is reviewed, with particular attention to the recent successful observations of CSB in measurements involving the production of a single neutral pion and to the related theoretical progress. Comment: 41 pages, 10 figures, for the Nov. 2006 edition of the Annual Review of Nuclear and Particle Physics

    Measuring Accuracy of Triples in Knowledge Graphs

    An increasing number of large-scale knowledge graphs have been constructed in recent years. These graphs are often created by text-based extraction, which can be very noisy. So far, cleaning knowledge graphs has mostly been carried out by human experts and is thus very inefficient. It is necessary to explore automatic methods for identifying and eliminating erroneous information. To achieve this, previous approaches primarily rely on internal information, i.e., the knowledge graph itself. In this paper, we introduce an automatic approach, Triples Accuracy Assessment (TAA), for validating RDF triples (source triples) in a knowledge graph by finding a consensus of matched triples (among target triples) from other knowledge graphs. TAA uses knowledge graph interlinks to find identical resources and applies different matching methods between the predicates of source triples and target triples. Based on the matched triples, TAA then calculates a confidence score to indicate the correctness of a source triple. In addition, we present an evaluation of our approach using the FactBench dataset for fact validation. Our findings show promising results for distinguishing between correct and wrong triples.
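    The consensus idea can be sketched in a few lines: follow interlinks (e.g. owl:sameAs) to other graphs, collect target triples whose predicates match the source predicate, and score the source object by agreement. The matching and scoring below are deliberately simplified illustrations, not TAA's actual functions.

    ```python
    # Sketch of consensus-based triple validation: follow interlinks to
    # other knowledge graphs and score the source object by how many of
    # the matched target triples agree with it. All data is illustrative.

    same_as = {"dbpedia:Berlin": ["wikidata:Q64", "yago:Berlin"]}

    # Target triples from other graphs, already predicate-matched.
    target_objects = {
        ("wikidata:Q64", "population"): "3644826",
        ("yago:Berlin", "population"): "3644826",
    }

    def confidence(source_entity, predicate, source_object):
        """Fraction of interlinked graphs agreeing with the source object."""
        linked = same_as.get(source_entity, [])
        found = [target_objects.get((e, predicate)) for e in linked]
        found = [o for o in found if o is not None]
        if not found:
            return None  # no evidence available in other graphs
        return sum(o == source_object for o in found) / len(found)

    print(confidence("dbpedia:Berlin", "population", "3644826"))  # 1.0
    print(confidence("dbpedia:Berlin", "population", "999"))      # 0.0
    ```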

    Keystroke Inference Using Smartphone Kinematics

    The use of smartphones is becoming ubiquitous in modern society. These very personal devices store large amounts of personal information, and we use them to access everything from our banks to our social networks; we communicate through them in both open, one-to-many channels and in more closed, private one-to-one conversations. In this paper we present a method to infer what is typed on a device purely from how the device moves in the user's hand. With very small amounts of training data (less than the size of a tweet), we are able to predict the text typed on a device with accuracies of up to 90%. We found no effect on this accuracy from how fast users type, how comfortable they are with smartphone keyboards, or how the device was held in the hand. It is trivial to create an application that can access a phone's motion data while the user is engaged in other applications; accessing motion data does not require any permission to be granted by the user and hence represents a tangible threat to smartphone users.
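    The general pipeline behind this kind of attack is to cut the accelerometer/gyroscope stream into per-tap windows, reduce each window to summary features, and train a classifier to predict which key was pressed. The sketch below follows that pattern with synthetic data standing in for real sensor readings; it is not the paper's exact feature set or model.

    ```python
    # Sketch of motion-based keystroke inference: per-tap sensor windows ->
    # summary features -> supervised classifier. Data is synthetic, with an
    # artificial signal injected so the pipeline has something to learn.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_taps, window, axes = 600, 50, 6   # 6 axes: 3 accel + 3 gyro
    keys = rng.integers(0, 26, size=n_taps)          # which letter was typed
    raw = rng.normal(size=(n_taps, window, axes))    # per-tap motion windows
    raw += keys[:, None, None] * 0.05                # inject a learnable signal

    def features(w):
        """Simple per-axis summary statistics for one tap window."""
        return np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)])

    X = np.array([features(w) for w in raw])
    X_tr, X_te, y_tr, y_te = train_test_split(X, keys, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print(f"accuracy: {clf.score(X_te, y_te):.2f}")
    ```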

    'Part'ly first among equals: Semantic part-based benchmarking for state-of-the-art object recognition systems

    An examination of object recognition challenge leaderboards (ILSVRC, PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small differences amongst themselves in terms of error rate/mAP. To better differentiate the top performers, additional criteria are required. Moreover, the (test) images, on which the performance scores are based, predominantly contain fully visible objects. Therefore, `harder' test images, mimicking the challenging conditions (e.g. occlusion) in which humans routinely recognize objects, need to be utilized for benchmarking. To address the concerns mentioned above, we make two contributions. First, we systematically vary the level of local object-part content, global detail and spatial context in images from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12. Second, we propose an object-part based benchmarking procedure which quantifies classifiers' robustness to a range of visibility and contextual settings. The benchmarking procedure relies on a semantic similarity measure that naturally addresses potential semantic granularity differences between the category labels in training and test datasets, thus eliminating manual mapping. We use our procedure on the PPSS-12 dataset to benchmark top-performing classifiers trained on the ILSVRC-2012 dataset. Our results show that the proposed benchmarking procedure enables additional differentiation among state-of-the-art object classifiers in terms of their ability to handle missing content and insufficient object detail. Given this capability for additional differentiation, our approach can potentially supplement existing benchmarking procedures used in object recognition challenge leaderboards. Comment: Extended version of our ACCV-2016 paper. Author formatting modified.
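    The semantic similarity measure that bridges label granularity can be pictured with WordNet-style taxonomy distance: a prediction of "dog" for a test image labeled "dalmatian" should earn partial credit rather than being counted as wrong. The sketch below uses NLTK's WordNet interface purely as one plausible instantiation; the paper's actual measure may differ.

    ```python
    # Sketch: scoring a predicted label against a ground-truth label of
    # different semantic granularity via WordNet path similarity, so "dog"
    # gets partial credit on a "dalmatian" image. Illustrative only.
    # Requires the WordNet corpus: python -c "import nltk; nltk.download('wordnet')"
    from nltk.corpus import wordnet as wn

    def label_similarity(predicted, truth):
        """Best path similarity between any noun synsets of the labels (0..1)."""
        pairs = [(p, t) for p in wn.synsets(predicted, pos=wn.NOUN)
                 for t in wn.synsets(truth, pos=wn.NOUN)]
        scores = [p.path_similarity(t) for p, t in pairs]
        scores = [s for s in scores if s is not None]
        return max(scores, default=0.0)

    print(label_similarity("dog", "dalmatian"))  # partial credit, < 1.0
    print(label_similarity("dog", "dog"))        # 1.0
    ```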

    Evaluating Maintainability Prejudices with a Large-Scale Study of Open-Source Projects

    Exaggeration or context changes can turn maintainability experience into prejudice. For example, JavaScript is often seen as the least elegant language and hence of lowest maintainability. Such prejudice should not guide decisions without prior empirical validation. We formulate 10 hypotheses about maintainability based on such prejudices and test them in a large set of open-source projects (6,897 GitHub repositories, 402 million lines, 5 programming languages). We operationalize maintainability with five static analysis metrics. We find that JavaScript code is not worse than other code, Java code shows higher maintainability than C# code, and C code has longer methods than other code. The quality of interface documentation is better in Java code than in other code. Code developed by teams is not of higher maintainability, and large code bases are not of lower maintainability. Projects with high maintainability are not more popular or more often forked. Overall, most hypotheses are not supported by open-source data. Comment: 20 pages
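    One of the metrics such a study operationalizes, method length, is straightforward to compute from source code. The sketch below measures it for Python files using the standard-library ast module; the paper covers five languages with dedicated static-analysis tooling, so this only illustrates the metric itself.

    ```python
    # Sketch: computing one maintainability proxy, method length, for
    # Python source. This is an illustration of the metric, not the
    # paper's actual multi-language tooling.
    import ast

    def method_lengths(source: str) -> list[int]:
        """Length in lines of every function/method defined in `source`."""
        tree = ast.parse(source)
        return [node.end_lineno - node.lineno + 1
                for node in ast.walk(tree)
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

    code = """
    def short():
        return 1

    def longer(x):
        y = x + 1
        y *= 2
        return y
    """
    lengths = method_lengths(code)
    print(lengths, "mean:", sum(lengths) / len(lengths))
    ```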

    Experiments on enlarging a lexical ontology

    This paper presents two simple experiments performed in order to enlarge the coverage of PULO, a lexical ontology based on and aligned with the Princeton WordNet. The first experiment explores the triangulation of the Galician, Catalan and Castilian wordnets with translation dictionaries from the Apertium project. The second explores Dicionário-Aberto entries in order to extract synsets from its definitions. Although similar approaches have already been applied to other languages, this paper documents their results for the PULO case.
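    The triangulation idea can be sketched simply: a candidate Portuguese word is attached to a synset when translations via several pivot wordnets converge on that synset. In the sketch below, the dictionaries, words, and synset IDs are made up for illustration and the voting threshold is an assumption, not PULO's actual procedure.

    ```python
    # Sketch of the triangulation heuristic: propose a Portuguese word for
    # a WordNet synset when translation dictionaries from several pivot
    # languages (Galician, Catalan, Castilian) all point to the same
    # synset. Dictionaries and synset IDs below are made-up illustrations.
    from collections import Counter

    # word in pivot language -> synset IDs it belongs to in that wordnet
    pivot_wordnets = {
        "gl": {"can": {"02084071-n"}},
        "ca": {"gos": {"02084071-n"}},
        "es": {"perro": {"02084071-n"}, "chucho": {"02084071-n", "12345678-n"}},
    }
    # Portuguese word -> its Apertium translations per pivot language
    translations = {"cão": {"gl": ["can"], "ca": ["gos"], "es": ["perro"]}}

    def triangulate(pt_word, min_votes=2):
        """Synsets supported by at least `min_votes` pivot languages."""
        votes = Counter()
        for lang, words in translations[pt_word].items():
            synsets = set().union(*(pivot_wordnets[lang].get(w, set())
                                    for w in words))
            votes.update(synsets)  # one vote per language per synset
        return [s for s, v in votes.items() if v >= min_votes]

    print(triangulate("cão"))  # ['02084071-n'] -- all three pivots agree
    ```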

    Cognitively-inspired Agent-based Service Composition for Mobile & Pervasive Computing

    Automatic service composition in mobile and pervasive computing faces many challenges due to the complex and highly dynamic nature of the environment. Common approaches treat service composition as a decision problem whose solution is usually addressed from optimization perspectives, which are not feasible in practice due to the intractability of the problem, the limited computational resources of smart devices, service hosts' mobility, and time constraints for tailoring composition plans. Thus, our main contribution is the development of a cognitively-inspired agent-based service composition model focused on bounded rationality rather than optimality, which allows the system to compensate for limited resources by selectively filtering out continuous streams of data. Our approach exhibits features such as distributedness, modularity, emergent global functionality, and robustness, which endow it with capabilities to perform decentralized service composition by orchestrating manifold service providers and conflicting goals from multiple users. The evaluation of our approach shows promising results when compared against state-of-the-art service composition models. Comment: This paper will appear at AIMS'19 (International Conference on Artificial Intelligence and Mobile Services) on June 2
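    Bounded rationality here contrasts with exhaustive optimization: rather than scoring every possible composition plan, an agent can satisfice, filtering the stream of advertised services and committing to the first candidate that clears an aspiration threshold within an inspection budget. The toy sketch below illustrates only that contrast; the thresholds, scores, and structure are invented, not the paper's model.

    ```python
    # Sketch of the bounded-rationality idea: instead of exhaustively
    # scoring every provider (intractable on resource-limited devices), an
    # agent satisfices over a stream of service advertisements. Invented data.
    import random

    random.seed(1)
    # Stream of (provider, quality score) advertisements arriving over time.
    stream = (("provider-%d" % i, random.random()) for i in range(10_000))

    def satisfice(adverts, threshold=0.9, budget=50):
        """Return the first acceptable provider within an inspection budget."""
        for inspected, (provider, score) in enumerate(adverts, start=1):
            if score >= threshold:
                return provider, score, inspected
            if inspected >= budget:
                break
        return None  # nothing acceptable in budget; lower the aspiration level

    print(satisfice(stream))  # commits after inspecting only a few adverts
    ```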