
    Worrying and rumination are both associated with reduced cognitive control

    Persistent negative thought is a hallmark feature of both major depressive disorder and generalized anxiety disorder. Despite its clinical significance, little is known about its underlying mechanisms. Recent studies suggest that reduced cognitive control might be an explanatory factor. We investigated the association between persistent negative thought and switching between internal representations in working memory, using the internal shift task (IST). The IST was administered to a group of undergraduates, classified as high- versus low-ruminators, or high- versus low-worriers. Results showed that high-ruminators and high-worriers had more difficulty switching between internal representations in working memory than low-ruminators and low-worriers. Importantly, results were only significant when the negative stimuli used in the IST reflected personally relevant worry themes for the participants. The results of this study indicate that rumination and worrying are both associated with reduced cognitive control for verbal information that is personally relevant.

    Inferring short-term volatility indicators from Bitcoin blockchain

    In this paper, we study the possibility of inferring early warning indicators (EWIs) for periods of extreme Bitcoin price volatility using features obtained from Bitcoin daily transaction graphs. We infer low-dimensional representations of transaction graphs for the period from 2012 to 2017 using the Bitcoin blockchain, and demonstrate how these representations can be used to predict extreme price volatility events. Our EWI, which is obtained with a non-negative decomposition, contains more predictive information than those obtained with singular value decomposition or with the scalar total Bitcoin transaction volume.
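The contrast between a non-negative decomposition and an SVD-based embedding of a daily feature matrix can be sketched as below. The synthetic feature matrix, component count, and scikit-learn calls are illustrative assumptions, not the paper's actual pipeline; real features would be derived from Bitcoin transaction graphs.

```python
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD

# Hypothetical daily transaction-graph features (rows = days, cols = graph stats).
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(200, 30)))  # non-negative, as NMF requires

# Non-negative decomposition: each row of W is a low-dimensional daily representation.
nmf = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
W = nmf.fit_transform(X)   # (200, 5) daily embedding
H = nmf.components_        # (5, 30) non-negative basis over graph features

# Baseline for comparison: SVD-based embedding of the same matrix.
svd = TruncatedSVD(n_components=5, random_state=0)
W_svd = svd.fit_transform(X)
```

Either embedding could then be fed to a downstream classifier of extreme-volatility days; the paper's finding is that the non-negative variant carries more predictive signal.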

    A deep matrix factorization method for learning attribute representations

    Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes that classical one-level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations, which lend themselves to a clustering interpretation according to different, unknown attributes of a given dataset. We also present a semi-supervised version of the algorithm, named Deep WSF, that allows the use of (partial) prior information for each of the known attributes of a dataset, so that the model can be used on datasets with mixed attribute knowledge. Finally, we show that our models are able to learn low-dimensional representations that are better suited not only for clustering but also for classification, outperforming Semi-Non-negative Matrix Factorization as well as other state-of-the-art methodologies.
    Comment: Submitted to TPAMI (16-Mar-2015)
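The building block stacked by Deep Semi-NMF is the single-layer Semi-NMF, X ≈ F·G with only G constrained to be non-negative. A minimal sketch is below, using a closed-form least-squares update for F and a projected-gradient step for G; the paper itself uses multiplicative update rules, so treat this as an illustration of the factorization, not the authors' algorithm.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, seed=0):
    """Single-layer Semi-NMF: X ~ F @ G with G >= 0, F unconstrained.
    Projected-gradient sketch; dimensions and step size are illustrative."""
    rng = np.random.default_rng(seed)
    G = np.abs(rng.normal(size=(k, X.shape[1])))
    F = X @ np.linalg.pinv(G)
    for _ in range(n_iter):
        # F has a closed-form least-squares solution given G.
        F = X @ np.linalg.pinv(G)
        # Projected gradient step keeps G non-negative; the step size
        # 1/||F^T F||_2 is the standard safe choice for this subproblem.
        grad = F.T @ (F @ G - X)
        lr = 1.0 / (np.linalg.norm(F.T @ F, 2) + 1e-9)
        G = np.maximum(G - lr * grad, 0.0)
    return F, G

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 60))  # Semi-NMF allows mixed-sign data
F, G = semi_nmf(X, 5)
```

Deep Semi-NMF then factorizes X ≈ Z1·Z2·…·H, applying this decomposition layer by layer so that each intermediate representation can capture a different level of hidden attributes.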

    Unsupervised Path Representation Learning with Curriculum Negative Sampling

    Path representations are critical in a variety of transportation applications, such as estimating path ranking in path recommendation systems and estimating path travel time in navigation systems. Existing studies often learn task-specific path representations in a supervised manner, which requires a large amount of labeled training data and generalizes poorly to other tasks. We propose an unsupervised learning framework, Path InfoMax (PIM), to learn generic path representations that work for different downstream tasks. We first propose a curriculum negative sampling method that, for each input path, generates a small number of negative paths by following the principles of curriculum learning. Next, PIM employs mutual information maximization to learn path representations from both a global and a local view. In the global view, PIM distinguishes the representations of the input paths from those of the negative paths. In the local view, PIM distinguishes the input path representations from the representations of the nodes that appear only in the negative paths. This enables the learned path representations to encode both global and local information at different scales. Extensive experiments on two downstream tasks, ranking score estimation and travel time estimation, using two road network datasets suggest that PIM significantly outperforms other unsupervised methods and can also be used as a pre-training method to enhance supervised path representation learning.
    Comment: This paper has been accepted by IJCAI-2
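One way the easy-to-hard ordering of curriculum negative sampling could look in code is sketched below: negatives generated later in the curriculum share more nodes with the input path and are therefore harder to distinguish from it. This is a hypothetical illustration of the curriculum principle, not the paper's exact sampling procedure.

```python
import random

def curriculum_negatives(path, node_pool, n_neg=4, seed=0):
    """Generate n_neg negative paths, ordered easy -> hard: negative i keeps
    the first i/n_neg of the input path and fills the rest with random nodes.
    Illustrative sketch only; node_pool should exclude nodes on `path`."""
    rng = random.Random(seed)
    negatives = []
    for i in range(n_neg):
        keep = int(len(path) * i / n_neg)   # overlap grows -> harder negative
        neg = list(path[:keep])
        while len(neg) < len(path):
            cand = rng.choice(node_pool)
            if cand not in neg:
                neg.append(cand)
        negatives.append(neg)
    return negatives
```

Feeding such an ordered batch to the mutual-information objective lets training start with easily separable negatives and gradually tighten the discrimination task.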

    Semantic Relation Classification via Convolutional Neural Networks with Simple Negative Sampling

    Syntactic features play an essential role in identifying relationships in a sentence. Previous neural network models often suffer from irrelevant information introduced when subjects and objects are far apart. In this paper, we propose to learn more robust relation representations from the shortest dependency path through a convolutional neural network. We further propose a straightforward negative sampling strategy to improve the assignment of subjects and objects. Experimental results show that our method outperforms the state-of-the-art methods on the SemEval-2010 Task 8 dataset.
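The subject/object negative-sampling idea can be sketched as follows: each labeled example yields one positive and one negative with the two arguments swapped, pushing the model to learn relation directionality. The data layout and helper name are assumptions for illustration, not the paper's implementation.

```python
def make_training_pairs(examples):
    """For each (subject, object, relation) triple, emit a positive pair and
    a negative pair with subject and object swapped (label 0).
    Hypothetical data layout; real inputs would be dependency paths."""
    pairs = []
    for subj, obj, relation in examples:
        pairs.append(((subj, obj), relation, 1))  # correct assignment
        pairs.append(((obj, subj), relation, 0))  # swapped -> negative sample
    return pairs
```

A classifier trained on both pairs cannot get away with treating the dependency path as an unordered bag of words, since the swapped negative differs only in argument order.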