122 research outputs found

    Automatically learning topics and difficulty levels of problems in online judge systems

    Online Judge (OJ) systems have been widely used in many areas, including programming, mathematical problem solving, and job interviews. Unlike other online learning systems, such as Massive Open Online Courses, most OJ systems are designed for self-directed learning without the intervention of teachers. Also, in most OJ systems, problems are simply listed in volumes with no clear organization by topic or difficulty level. As such, problems in the same volume are mixed in terms of topics and difficulty levels. By analyzing large-scale users' learning traces, we observe that there are two major learning modes (or patterns): users either practice problems sequentially within the same volume regardless of their topics, or they attempt problems on the same topic, which may be spread across multiple volumes. Our observation is consistent with findings in classic educational psychology. Based on this observation, we propose a novel two-mode Markov topic model to automatically detect the topics of online problems by jointly characterizing the two learning modes. To further predict the difficulty level of online problems, we propose a competition-based expertise model using the learned topic information. Extensive experiments on three large OJ datasets have demonstrated the effectiveness of our approach on three different tasks: skill topic extraction, expertise competition prediction, and problem recommendation.
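The abstract does not detail the competition-based expertise model, but an Elo-style update, treating each submission as a "match" between a user's skill and a problem's difficulty on a shared scale, is one common way to realize such a model. A minimal sketch (the logistic form, the learning rate `k`, and the shared scale are assumptions, not the paper's method):

```python
import math

def update_expertise(user_skill, problem_difficulty, solved, k=0.1):
    """One Elo-style update: treat a submission as a 'match' between
    a user's skill and a problem's difficulty on a shared scale."""
    # Expected probability that the user solves the problem (logistic model).
    expected = 1.0 / (1.0 + math.exp(problem_difficulty - user_skill))
    outcome = 1.0 if solved else 0.0
    # Move skill and difficulty in opposite directions by the "surprise".
    user_skill += k * (outcome - expected)
    problem_difficulty -= k * (outcome - expected)
    return user_skill, problem_difficulty

# A user who solves most attempts drifts up; the problem drifts down.
skill, diff = 0.0, 0.0
for solved in [True, True, False, True]:
    skill, diff = update_expertise(skill, diff, solved)
print(skill > 0, diff < 0)
```

After fitting, problems can be ranked by their learned difficulty within each extracted topic, which is one plausible route to the difficulty-level prediction the abstract describes.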

    Angiotensin II diminishes the effect of SGK1 on the WNK4-mediated inhibition of ROMK1 channels

    ROMK1 channels are located in the apical membrane of the connecting tubule and cortical collecting duct and mediate potassium secretion during normal dietary intake. We used perforated whole-cell patch clamp to explore the effect of angiotensin II on these channels in HEK293 cells transfected with green fluorescent protein (GFP)-ROMK1. Angiotensin II inhibited ROMK1 channels in a dose-dependent manner, an effect abolished by losartan or by inhibition of protein kinase C. Furthermore, angiotensin II stimulated a protein kinase C-sensitive phosphorylation of tyrosine 416 within c-Src. Inhibition of protein tyrosine kinase attenuated the effect of angiotensin II. Western blot studies suggested that angiotensin II inhibited ROMK1 channels by enhancing their tyrosine phosphorylation, a notion supported by angiotensin II's failure to inhibit potassium channels in cells transfected with the ROMK1 tyrosine mutant (R1Y337A). However, angiotensin II restored the with-no-lysine kinase-4 (WNK4)-induced inhibition of R1Y337A in the presence of serum- and glucocorticoid-induced kinase 1 (SGK1), which otherwise reversed the inhibitory effect of WNK4 on ROMK1. Moreover, protein tyrosine kinase inhibition abolished the angiotensin II-induced restoration of WNK4-mediated inhibition of ROMK1. Angiotensin II inhibited ROMK channels in the cortical collecting duct of rats on a low-sodium diet, an effect blocked by protein tyrosine kinase inhibition. Thus, angiotensin II inhibits ROMK channels by two mechanisms: increasing tyrosine phosphorylation of the channel and synergizing the WNK4-induced inhibition. Hence, angiotensin II may have an important role in suppressing potassium secretion during volume depletion.

    3D Target Recognition Based on Decision Layer Fusion


    Research on Robust Model for Web Service Selection


    Optimality of the Approximation and Learning by the Rescaled Pure Super Greedy Algorithms

    We propose the Weak Rescaled Pure Super Greedy Algorithm (WRPSGA) for approximation with respect to a dictionary D in a Hilbert space. The WRPSGA is simpler than some popular greedy algorithms. We show that the convergence rate of the RPSGA on the closure of the convex hull of the μ-coherent dictionary D is optimal. Then, we design the Rescaled Pure Super Greedy Learning Algorithm (RPSGLA) for kernel-based supervised learning. We prove that the convergence rate of the RPSGLA can be arbitrarily close to the best rate O(m⁻¹) under some mild assumptions.
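As a rough illustration of the rescaling idea, not the paper's exact WRPSGA (which also takes "super" steps selecting several atoms at once and allows a weakness parameter), a pure greedy iteration with a rescaling step in a finite-dimensional Hilbert space might look like this sketch:

```python
import numpy as np

def rescaled_pure_greedy(f, dictionary, steps=10):
    """Minimal sketch of a rescaled pure greedy iteration in R^n.
    dictionary: array of unit-norm atoms, shape (num_atoms, n)."""
    approx = np.zeros_like(f)
    for _ in range(steps):
        residual = f - approx
        # Greedy step: pick the atom most correlated with the residual.
        inner = dictionary @ residual
        i = np.argmax(np.abs(inner))
        approx = approx + inner[i] * dictionary[i]
        # Rescaling step: scale the whole approximant to best fit f.
        denom = approx @ approx
        if denom > 0:
            approx = ((f @ approx) / denom) * approx
    return approx

rng = np.random.default_rng(0)
atoms = rng.standard_normal((50, 20))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
f = atoms[:3].sum(axis=0)  # target near the span of a few atoms
g = rescaled_pure_greedy(f, atoms, steps=25)
print(np.linalg.norm(f - g))  # residual norm shrinks as steps grow
```

The rescaling step makes the approximation error non-increasing: since scaling by 1 is always a candidate, the optimal rescale can only tighten the fit obtained by the greedy step.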