
    Are social inequalities being transmitted through higher education? A propensity-score matching analysis of private versus public university graduates using machine learning models

    This study investigates differences in employment outcomes between graduates of private and public universities in Spain. The methodology involves propensity score matching, utilising novel machine learning approaches: machine learning algorithms can be used to estimate propensity scores and can offer advantages over conventional methods. Contrary to previous research carried out in Spain, this analysis found a wage premium in the short and medium term for graduates of private universities, although the differences were relatively small. The discussion outlines the implications for intergenerational inequality, policy development, and future research utilising machine learning algorithms.

    Computing policy parameters for stochastic inventory control using stochastic dynamic programming approaches

    The objective of this work is to introduce techniques for the computation of optimal and near-optimal inventory control policy parameters for the stochastic inventory control problem under Scarf's setting. A common aspect of the solutions presented herein is the use of stochastic dynamic programming, a mathematical programming technique introduced by Bellman. Stochastic dynamic programming is hybridised with branch-and-bound, binary search, constraint programming and other computational techniques to develop innovative and competitive solutions.

    In this work, the classic single-item, single-location inventory control problem with penalty cost under independent stochastic demand is extended to model a fixed review cost. This cost is charged when the inventory level is assessed at the beginning of a period. This operation is costly in practice, and including it can lead to significant savings. It also makes it possible to model an order cancellation penalty charge.

    The first contribution presented here is the first stochastic dynamic programming formulation that captures Bookbinder and Tan's static-dynamic uncertainty control policy with penalty cost. Numerous techniques are available in the literature to compute such parameters; however, they all make assumptions on the demand probability distribution. This technique has many similarities to Scarf's stochastic dynamic programming formulation, and it does not require any external solver to be deployed. Memoisation and binary search techniques are deployed to improve computational performance. Extensive computational studies show that this new model has a tighter optimality gap compared to the state of the art.

    The second contribution is the first procedure to compute cost-optimal parameters for the well-known (R, s, S) policy. Practitioners widely use such a policy; however, the determination of its parameters is considered computationally prohibitive. A technique that hybridises stochastic dynamic programming and branch-and-bound is presented, alongside computational enhancements. Computing the optimal policy allows the determination of optimality gaps for future heuristics. This approach can solve instances of considerable size, making it usable by practitioners. The computational study shows the cost reduction that such a system can provide.

    Thirdly, this work presents the first heuristics for determining near-optimal parameters for the (R, s, S) policy. The first is an algorithm that formally models the (R, s, S) policy computation in the form of a functional equation. The second is a heuristic formed by a hybridisation of (R, S) and (s, S) policy parameter solvers. These heuristics can compute near-optimal parameters in a fraction of the time required by the exact methods, and they can be used to speed up the optimal branch-and-bound technique.

    The last contribution is the introduction of a technique to encode dynamic programming in constraint programming. Constraint programming provides the user with an expressive modelling language and delegates the search for the solution to a specific solver. The possibility to seamlessly encode dynamic programming provides new modelling options, e.g. the computation of optimal (R, s, S) policy parameters. The performance in this specific application is not competitive with the other techniques proposed herein; however, this encoding opens up new connections between constraint programming and dynamic programming, and allows deploying DP-based constraints in modelling languages such as MiniZinc. The computational study shows how this technique can outperform a similar encoding for mixed-integer programming.
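    The flavour of the Scarf-style stochastic dynamic programming that underpins these contributions can be sketched on a toy lot-sizing instance: memoised recursion over (period, inventory) states, with a fixed cost charged whenever an order is placed. All parameter values below are hypothetical and the model is deliberately minimal (no review cost, small horizon).

```python
from functools import lru_cache

K, h, p = 30.0, 1.0, 5.0        # fixed order, holding, and penalty costs (toy values)
demand = {1: 0.5, 2: 0.5}       # discrete per-period demand distribution
T, MAX = 3, 10                  # planning horizon and cap on the inventory level

@lru_cache(maxsize=None)        # memoisation over (period, inventory) states
def cost(t, inv):
    """Minimum expected cost-to-go from period t with inventory inv."""
    if t == T:
        return 0.0
    best = float("inf")
    for order_up_to in range(inv, MAX + 1):   # decision: raise inventory to this level
        c = K if order_up_to > inv else 0.0   # fixed cost only if an order is placed
        for d, prob in demand.items():
            left = order_up_to - d
            stage = h * left if left >= 0 else -p * left  # holding or backlog penalty
            c += prob * (stage + cost(t + 1, max(left, -MAX)))
        best = min(best, c)
    return best

print(cost(0, 0))   # expected cost of an optimal ordering plan from empty stock
```

The thesis's contributions layer binary search, branch-and-bound, K-convexity arguments and policy structure on top of this kind of recursion to make it tractable at realistic sizes.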

    Uncertainty-Aware Workload Prediction in Cloud Computing

    Predicting future resource demand in Cloud Computing is essential for managing Cloud data centres and guaranteeing customers a minimum Quality of Service (QoS) level. Modelling the uncertainty of future demand improves the quality of the prediction and reduces the waste due to overallocation. In this paper, we propose univariate and bivariate Bayesian deep learning models to predict the distribution of future resource demand and its uncertainty. We design different training scenarios to train these models, where each procedure is a different combination of pretraining and fine-tuning steps on multiple dataset configurations. We also compare the bivariate model to its univariate counterpart, trained with one or more datasets, to investigate how different components affect the accuracy of the prediction and impact the QoS. Finally, we investigate whether our models have transfer learning capabilities. Extensive experiments show that pretraining with multiple datasets boosts performance while fine-tuning does not. Our models generalise well on related but unseen time series, proving transfer learning capabilities. Runtime performance analysis shows that the models are deployable in real-world applications. For this study, we preprocessed twelve datasets from real-world traces in a consistent and detailed way and made them available to facilitate research in this field.
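    The link between predictive uncertainty and overallocation waste can be illustrated without any deep learning machinery. Assuming (purely for illustration) that a model emits a Gaussian predictive distribution for demand, the resources to provision for a given QoS target are a quantile of that distribution, so a more confident model (smaller predictive standard deviation) needs less headroom:

```python
from statistics import NormalDist

def allocation(mean, std, qos=0.95):
    """Resources to provision so demand is met with probability `qos`,
    assuming a Gaussian predictive distribution (illustrative only;
    the paper's models are Bayesian neural networks)."""
    return NormalDist(mean, std).inv_cdf(qos)

# hypothetical predictions for one workload's CPU demand (cores)
print(round(allocation(4.0, 0.5), 2))   # 4.82: headroom above the mean of 4.0
print(round(allocation(4.0, 0.1), 2))   # 4.16: a sharper model wastes less
```

This is the mechanism by which better-calibrated uncertainty directly reduces overallocation while preserving the QoS guarantee.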

    Targeting Bruton's Tyrosine Kinase in Chronic Lymphocytic Leukemia at the Crossroad between Intrinsic and Extrinsic Pro-survival Signals

    Chemoimmunotherapies for chronic lymphocytic leukemia (CLL) have had a positive impact on clinical outcome, but many patients relapse or become refractory to the available treatments. The main goal of CLL research is the identification of specific targets in order to develop new therapeutic strategies to cure the disease. The B-cell receptor signalling pathway is necessary for the survival of malignant B cells, and its related molecules have recently become new targets for therapy. Moreover, the leukemic microenvironment delivers survival signals to neoplastic cells, overcoming the apoptotic effect induced by traditional drugs. In this context, the investigation of Bruton's tyrosine kinase (Btk) is useful in: i) dissecting CLL pathogenesis; ii) finding new therapeutic approaches that strike intrinsic as well as extrinsic pro-survival signals in CLL simultaneously. This paper reviews these main topics.

    Modelling dynamic programming-based global constraints in constraint programming

    Dynamic Programming (DP) can solve many complex problems in polynomial or pseudo-polynomial time, and it is widely used in Constraint Programming (CP) to implement powerful global constraints. Implementing such constraints is a nontrivial task beyond the capability of most CP users, who must rely on their CP solver to provide an appropriate global constraint library. This also limits the usefulness of generic CP languages, some or all of whose solvers might not provide the required constraints. A technique was recently introduced for directly modelling DP in CP, which provides a way around this problem. However, the technique was never compared with other approaches, and it lacked a clear formalisation. In this paper we formalise the approach and compare it with existing techniques on MiniZinc benchmark problems, including the flow formulation of DP in Integer Programming. We further show how it can be improved by state reduction methods.
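    The idea behind such encodings can be sketched by unrolling a DP's state graph into per-stage transition relations, which is the kind of object a CP solver could then post as table constraints over stage variables. The toy below performs only the DP part of that view, on a hypothetical two-item 0/1 knapsack, to show what the unrolled stage-by-stage structure looks like:

```python
# Hypothetical instance: capacity 4, items given as (weight, value) pairs.
items = [(2, 3), (3, 4)]
capacity = 4

# State = remaining capacity; one stage per item, processed backwards.
best = {c: 0 for c in range(capacity + 1)}   # value-to-go after the last stage
for weight, value in reversed(items):
    nxt = {}
    for cap in range(capacity + 1):
        # two transitions out of each state: skip the item, or take it
        take = best[cap - weight] + value if cap >= weight else float("-inf")
        nxt[cap] = max(best[cap], take)
    best = nxt

print(best[capacity])   # → 4 (take only the second item)
```

In the CP encoding, the per-stage transition tables become constraints and the stage states become decision variables, so the solver's propagation replaces this explicit backward pass.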

    Stochastic dynamic programming heuristic for the (R, s, S) policy parameters computation

    The (R, s, S) policy is a stochastic inventory control policy widely used by practitioners. In an inventory system managed according to this policy, the inventory is reviewed every R periods; if the observed inventory position is lower than the reorder level s, an order is placed. The order quantity is set to raise the inventory position to the order-up-to-level S. This paper introduces a new stochastic dynamic programming (SDP) based heuristic to compute the (R, s, S) policy parameters for the non-stationary stochastic lot-sizing problem with backlogging of excess demand, fixed order and review costs, and linear holding and penalty costs. In a recent work, Visentin et al. (2021) present an approach to compute optimal policy parameters under these assumptions. Our model combines a greedy relaxation of the problem with a modified version of Scarf's (s, S) SDP. A naive implementation of the model requires prohibitive computational effort to compute the parameters; however, the computation can be sped up using the K-convexity property and memoisation techniques. The resulting algorithm is considerably faster than the state of the art, making it more readily adoptable by practitioners. An extensive computational study compares our approach with the algorithms available in the literature.
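    The policy described above is easy to state operationally. The sketch below simulates it on a single demand path to make the control rule concrete: every R periods a review is charged; if the position is below s, an order raises it to S. All cost figures and the demand stream are hypothetical, and this Monte Carlo evaluation is an illustration of the policy, not the paper's SDP heuristic.

```python
import random

def simulate_RsS(R, s, S, demands, K=50.0, review=10.0, h=1.0, p=5.0):
    """Cost of running an (R, s, S) policy over one demand path.

    K: fixed order cost; review: per-review cost;
    h: holding cost per unit held; p: penalty per unit backlogged.
    """
    inv, total = S, 0.0
    for t, d in enumerate(demands):
        if t % R == 0:          # review instant
            total += review
            if inv < s:         # below reorder level: order up to S
                total += K
                inv = S
        inv -= d
        total += h * inv if inv >= 0 else -p * inv
    return total

random.seed(0)
demands = [random.randint(0, 5) for _ in range(40)]
print(simulate_RsS(R=4, s=5, S=15, demands=demands))
```

Searching over (R, s, S) with such an evaluator is exactly what becomes computationally prohibitive at scale, which is the gap the paper's SDP-based heuristic addresses.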