
    A Bayesian Approach toward Active Learning for Collaborative Filtering

    Collaborative filtering is a useful technique for exploiting the preference patterns of a group of users to predict the utility of items for the active user. In general, the performance of collaborative filtering depends on the number of rated examples given by the active user: the more rated examples the active user provides, the more accurate the predicted ratings will be. Active learning provides an effective way to acquire the most informative rated examples from active users. Previous work on active learning for collaborative filtering considers only the expected loss function based on the estimated model, which can be misleading when the estimated model is inaccurate. This paper takes one step further by taking into account the posterior distribution of the estimated model, which results in a more robust active learning algorithm. Empirical studies with movie-rating datasets show that when the number of ratings from the active user is restricted to be small, active learning methods based only on the estimated model do not perform well, while the active learning method using the model distribution achieves substantially better performance. Comment: Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI2004).
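
    As a rough illustration of the contrast the abstract draws, the sketch below (Python with toy numbers; the function and variable names, array shapes, and loss values are illustrative assumptions, not the paper's actual model) compares picking the query item by expected loss under a single estimated model with averaging that expected loss over posterior samples of the model before choosing.

```python
import numpy as np

def select_item_point_estimate(probs_hat, losses_hat):
    # probs_hat, losses_hat: arrays of shape (n_items, n_rating_levels).
    # probs_hat[i, r] = P(active user gives rating r to item i) under the single
    # estimated model; losses_hat[i, r] = predicted loss after observing that rating.
    return int(np.argmin((probs_hat * losses_hat).sum(axis=1)))

def select_item_bayesian(probs_samples, losses_samples):
    # Same quantities, with a leading axis over posterior samples of the model:
    # shape (n_samples, n_items, n_rating_levels). The expected loss is averaged
    # over the posterior before choosing which item to ask the user to rate.
    per_sample = (probs_samples * losses_samples).sum(axis=2)  # (n_samples, n_items)
    return int(np.argmin(per_sample.mean(axis=0)))

# Toy usage: 3 posterior samples, 2 candidate items, 5 rating levels.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=(3, 2))    # per-sample rating distributions
losses = rng.uniform(0.5, 1.5, size=(3, 2, 5))    # hypothetical post-query losses
print(select_item_point_estimate(probs[0], losses[0]))  # point-estimate choice
print(select_item_bayesian(probs, losses))               # posterior-averaged choice
```

    Averaging over posterior samples discounts items whose apparent informativeness rests on a single, possibly inaccurate, point estimate, which is the robustness argument the abstract makes.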

    Estrogen Protects the Female Heart from Ischemia/Reperfusion Injury through Manganese Superoxide Dismutase Phosphorylation by Mitochondrial p38β at Threonine 79 and Serine 106.

    A collective body of evidence indicates that estrogen protects the heart from myocardial ischemia/reperfusion (I/R) injury, but the underlying mechanism remains incompletely understood. We have previously delineated a novel mechanism of how 17β-estradiol (E2) protects cultured neonatal rat cardiomyocytes from hypoxia/reoxygenation (H/R) by identifying a functionally active mitochondrial pool of p38β and E2-driven upregulation of manganese superoxide dismutase (MnSOD) activity via p38β, leading to the suppression of reactive oxygen species (ROS) and apoptosis. Here we investigate these cytoprotective actions of E2 in vivo. Left coronary artery ligation and reperfusion was used to produce I/R injury in ovariectomized (OVX) female mice and in estrogen receptor (ER) null female mice. E2 treatment in OVX mice reduced the left ventricular infarct size, accompanied by increased activity of mitochondrial p38β and MnSOD. I/R-induced infarct size in ERα knockout (ERKO), ERβ knockout (BERKO), and ERα and β double knockout (DERKO) female mice was larger than that in wild type (WT) mice, with little difference among ERKO, BERKO, and DERKO. Loss of both ERα and ERβ led to reduced activity of mitochondrial p38β and MnSOD at baseline and after I/R. The physical interaction between mitochondrial p38β and MnSOD in the heart was detected by co-immunoprecipitation (co-IP). Threonine 79 (T79) and serine 106 (S106) of MnSOD were identified as being phosphorylated by p38β in kinase assays. Overexpression of WT MnSOD in cardiomyocytes reduced ROS generation during H/R, while point mutation of T79 and S106 of MnSOD to alanine abolished its antioxidative function. We conclude that the protective effects of E2 and ER against cardiac I/R injury involve the regulation of MnSOD via posttranslational modification of the dismutase by p38β.

    Building a Large Scale Dataset for Image Emotion Recognition: The Fine Print and The Benchmark

    Psychological research results have confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people's emotional reactions towards images. To this end, different kinds of hand-tuned features have been proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer vision-related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up being 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using state-of-the-art methods, including CNNs. Comment: 7 pages, 7 figures, AAAI 2016.
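
    The kind of CNN benchmark the abstract mentions typically amounts to fine-tuning a pretrained classifier on the emotion labels. The sketch below is a minimal example of that recipe, assuming a PyTorch/torchvision setup; the number of emotion categories (NUM_EMOTIONS), the ResNet-50 backbone, and the hyperparameters are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 8  # assumed number of emotion categories; not stated in the abstract

# ImageNet-pretrained backbone with the classifier head replaced: a common
# fine-tuning recipe for this kind of benchmark (the backbone choice is illustrative).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_EMOTIONS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    # One optimisation step on a batch of labelled emotion images
    # (images: float tensor of shape (B, 3, H, W); labels: long tensor of shape (B,)).
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```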

    2.5D multi-view gait recognition based on point cloud registration

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
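
    The dimension-reduction step the abstract describes (2D discrete cosine transform followed by 2D principal component analysis) could look roughly like the sketch below. The function names, the number of retained DCT coefficients, and the number of 2D PCA components are assumptions for illustration; this is not the paper's exact Color Gait Curvature Image pipeline.

```python
import numpy as np
from scipy.fft import dctn

def dct_block(image, keep=16):
    # 2D DCT of one gait image; keep only the top-left `keep` x `keep`
    # low-frequency coefficients, where most of the image energy concentrates.
    return dctn(np.asarray(image, dtype=float), norm="ortho")[:keep, :keep]

def two_d_pca(images, n_components=8):
    # 2D PCA: eigen-decompose the column-wise covariance of the mean-centred
    # image stack and project every image onto the leading eigenvectors,
    # giving an (h x n_components) feature matrix per image.
    stack = np.stack(images).astype(float)                            # (n, h, w)
    centred = stack - stack.mean(axis=0)
    cov = np.einsum("nij,nik->jk", centred, centred) / len(images)    # (w, w)
    eigvals, eigvecs = np.linalg.eigh(cov)
    proj = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]       # (w, k)
    return [img @ proj for img in centred]

# Toy usage: reduce a few random 64x64 "gait curvature images".
rng = np.random.default_rng(1)
feats = two_d_pca([dct_block(rng.random((64, 64))) for _ in range(5)])
print(feats[0].shape)   # (16, 8) with the defaults above
```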