
    Multi-Task Learning Regression via Convex Clustering

    Multi-task learning (MTL) is a methodology that aims to improve the overall performance of estimation and prediction by sharing common information among related tasks. In MTL, several assumptions about task relationships have been considered, along with methods to incorporate them. A natural assumption in practice is that tasks fall into clusters according to their characteristics. Under this assumption, the group fused regularization approach clusters the tasks by shrinking the differences among task coefficients, which enables the transfer of common information within the same cluster. However, this approach also transfers information between different clusters, which worsens estimation and prediction. To overcome this problem, we propose an MTL method with a centroid parameter representing the cluster center of each task. Because this model separates the parameters for regression from the parameters for clustering, it improves the estimation and prediction accuracy of the regression coefficient vectors. We show the effectiveness of the proposed method through Monte Carlo simulations and applications to real data. Comment: 18 pages, 4 tables
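    As an illustrative sketch (the notation, function names, and penalty forms below are our own simplifications, not taken from the paper), the group fused approach penalizes pairwise differences between task coefficient vectors directly, while the centroid variant attaches each task's regression vector to a separate centroid parameter and fuses only the centroids:

    ```python
    import numpy as np

    def fused_mtl_objective(X_list, y_list, W, lam):
        """Group fused MTL objective (illustrative): per-task squared loss plus
        a fusion penalty on pairwise differences of task coefficient vectors."""
        loss = sum(np.sum((y - X @ w) ** 2)
                   for X, y, w in zip(X_list, y_list, W))
        T = len(W)
        penalty = sum(np.linalg.norm(W[s] - W[t])
                      for s in range(T) for t in range(s + 1, T))
        return loss + lam * penalty

    def centroid_mtl_objective(X_list, y_list, W, U, lam1, lam2):
        """Centroid variant (illustrative): each regression vector w_t is shrunk
        toward its own centroid u_t, and clustering is performed by fusing the
        centroids, separating regression parameters from clustering parameters."""
        loss = sum(np.sum((y - X @ w) ** 2)
                   for X, y, w in zip(X_list, y_list, W))
        T = len(W)
        attach = sum(np.linalg.norm(W[t] - U[t]) ** 2 for t in range(T))
        fuse = sum(np.linalg.norm(U[s] - U[t])
                   for s in range(T) for t in range(s + 1, T))
        return loss + lam1 * attach + lam2 * fuse
    ```

    The point of the separation is that shrinkage between clusters acts on the centroids rather than directly on the regression coefficients, so coefficient estimates within a cluster are less distorted by unrelated clusters.
    
    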

    Simultaneous Modeling of Disease Screening and Severity Prediction: A Multi-task and Sparse Regularization Approach

    Disease prediction is one of the central problems in biostatistical research. Some biomarkers are helpful not only for diagnosing and screening diseases but are also associated with disease severity. A prediction model that can estimate severity at the diagnosis or screening stage should therefore be useful, for example for treatment prioritization. We focus on the combined tasks of screening and severity prediction, considering a combined response variable such as {healthy, mild, intermediate, severe}. This type of response is ordinal, but since the two tasks do not necessarily share the same statistical structure, the conventional cumulative logit model (CLM) may not be suitable. To handle the composite ordinal response, we propose the Multi-task Cumulative Logit Model (MtCLM) with structural sparse regularization. This model is flexible enough to fit the different structures of the two tasks while capturing their shared structure. In addition, MtCLM is valid as a stochastic model over the entire predictor space, unlike another conventional and flexible model, the non-parallel cumulative logit model (NPCLM). We conduct simulation experiments and real data analysis to illustrate the prediction performance and interpretability.
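    For reference, the conventional CLM that the abstract builds on models the cumulative probabilities P(Y ≤ k) with a shared coefficient vector and ordered thresholds. A minimal sketch (function and parameter names are ours, and this is the baseline CLM, not the proposed MtCLM):

    ```python
    import numpy as np

    def clm_probs(x, beta, thresholds):
        """Conventional cumulative logit model:
        P(Y <= k | x) = sigmoid(theta_k - x' beta) for ordered thresholds
        theta_1 < ... < theta_{K-1}; category probabilities are the
        differences of adjacent cumulative probabilities."""
        eta = x @ beta
        cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - eta)))
        cum = np.concatenate([[0.0], cum, [1.0]])
        return np.diff(cum)
    ```

    Because the thresholds are ordered and the same linear predictor appears in every cumulative probability, the category probabilities are guaranteed nonnegative everywhere; relaxing this shared structure (as the non-parallel CLM does) can break that guarantee, which is the validity issue the abstract refers to.
    
    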

    Bayesian generalized fused lasso modeling via NEG distribution

    The fused lasso penalizes a loss function by the L1 norm of both the regression coefficients and their successive differences, encouraging sparsity in both. In this paper, we propose Bayesian generalized fused lasso modeling based on a normal-exponential-gamma (NEG) prior distribution. The NEG prior is placed on the differences of successive regression coefficients. The proposed method enables us to construct a more versatile sparse model than the ordinary fused lasso through a flexible regularization term. Simulation studies and real data analyses show that the proposed method outperforms the ordinary fused lasso.
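    The ordinary fused lasso penalty that this work generalizes can be written down in a few lines (a sketch of the classical penalty only; the NEG-prior generalization in the paper replaces the Laplace-type term on the differences and is not reproduced here):

    ```python
    import numpy as np

    def fused_lasso_penalty(beta, lam1, lam2):
        """Ordinary fused lasso penalty: L1 on the coefficients themselves
        plus L1 on their successive differences, so both individual
        coefficients and adjacent changes are shrunk toward zero."""
        beta = np.asarray(beta, dtype=float)
        return lam1 * np.abs(beta).sum() + lam2 * np.abs(np.diff(beta)).sum()
    ```

    In the Bayesian view, exponentiating the negative penalty gives a Laplace-type prior on coefficients and differences; placing an NEG prior on the differences instead yields the heavier-tailed, more flexible shrinkage the abstract describes.
    
    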