
    Development and Validation of a Global Competency Framework for Preparing New Graduates for Early Career Professional Roles

    Objectives: The objective was to develop a global competency model applicable across a wide range of jobs, industries, and geographies for university graduates entering the workplace. Method: The competency model was developed using a global panel of subject matter experts and a validation survey of over 25,000 students, faculty, staff, and employers across more than 30 countries. Results: The results showed substantial consistency in the importance and criticality ratings of the competencies, with Achieving Objectives, Analyzing and Solving Problems, Adapting to Change, Communicating Orally, Learning and Self-Development, Making Decisions, Planning and Organizing, and Working Well with Others as the highest-rated competencies across regions, roles, and industries. Conclusions: The most important competencies for students entering the workforce were consistent across different jobs, industries, and countries. The diversity and varied experience levels of the sample provide greater generalizability than most competency modeling projects, which are often idiosyncratic to specific roles, industries, subjects, or levels. Implications for Theory and/or Practice: University faculty and staff can use the results of the validation study to develop curricula and programs that foster these competencies, so that their students are better prepared to enter the workplace. Although some organizations emphasize leadership as important for all professional employees, Managing the Work of Others, Leading Others, and Influencing Others were consistently rated lower in importance by employers across all roles and regions and may not be appropriate as the primary focus of skill development for new graduates.

    Analyzing collaborative learning processes automatically

    In this article, we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued collaborative learning processes by adapting and applying recent text classification technologies would make it far less arduous to obtain insights from corpus data. This endeavor also holds the potential to substantially improve online instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues, such as reliability, validity, and efficiency, that should be considered when deciding whether to adopt a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important part of making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions the CSCL community is interested in.
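
    A minimal sketch of the general approach described above, not TagHelper itself: assuming a small corpus of human-coded discourse segments, simple n-gram counts stand in for the richer linguistic pattern detectors the authors describe, and a standard classifier is trained to reproduce the human codes. The segment texts and code labels below are invented placeholders.

    # Hedged sketch: generic text classification over coded discourse segments.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    segments = [
        ("I think the answer is 12 because the ramp is steeper", "claim"),
        ("We measured 12 cm in the second trial",                "claim"),
        ("Why do you think the ramp matters?",                   "question"),
        ("Can you explain how you measured that?",               "question"),
    ]
    texts, codes = zip(*segments)

    # Unigram/bigram counts are a crude stand-in for hand-built pattern detectors.
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(list(texts), list(codes))

    # Predict the discourse code of an unseen segment (expected to lean toward "question").
    print(model.predict(["Why did you choose that value?"]))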

    Predictive Customer Lifetime value modeling: Improving customer engagement and business performance

    CookUnity, a meal subscription service, has seen substantial annual revenue growth over the past three years. However, this growth has been driven primarily by acquiring new users to expand the customer base rather than by an evident increase in customers' spending levels. Had subscription prices not been raised, the company's customer lifetime value (CLV) would have remained the same as it was three years ago. Consequently, the company's leadership recognizes the need for a holistic approach to improving CLV. The objective of this thesis is to develop a comprehensive understanding of CLV, its implications, and how companies leverage it to inform strategic decisions. Throughout the study, our central focus is to deliver a fully functional and efficient machine learning solution to CookUnity with strong predictive capabilities, enabling accurate forecasting of each customer's future CLV. By equipping CookUnity with this tool, we aim to enable the company to strategically leverage CLV for sustained growth. To achieve this objective, we analyze various methodologies and approaches to CLV analysis, evaluating their applicability and effectiveness in the context of CookUnity. We explore the available data sources that can serve as predictors of CLV, ensuring that the most relevant and meaningful variables are incorporated into our model. Additionally, we compare different modeling approaches to identify the top-performing one and examine the implications of implementing it at CookUnity. By applying data-driven strategies based on our predictive CLV model, CookUnity will be able to optimize order levels and maximize the lifetime value of its customer base. The outcome of this thesis is a robust ML solution with strong prediction accuracy and practical usability within the company. Furthermore, the insights gained from our research contribute to a broader understanding of CLV in the subscription-based business context, stimulating further exploration and advancement in this field of study.
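
    As a rough illustration of what such a predictive CLV model could look like (the thesis's actual features, data, and algorithm are not specified here), one could regress a future-spend label on behavioural predictors. The file name and column names below are hypothetical.

    # Hedged sketch of a CLV regression; data and columns are assumed, not CookUnity's.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    df = pd.read_csv("customers.csv")                      # hypothetical export
    features = ["orders_last_90d", "avg_order_value",
                "weeks_subscribed", "skip_rate"]           # assumed predictors
    X, y = df[features], df["spend_next_12m"]              # assumed future-CLV label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))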

    Validating a forced‑choice method for eliciting quality‑of‑reasoning judgments

    In this paper, we investigate the criterion validity of forced-choice comparisons of the quality of written arguments with normative solutions. Across two studies, novices and experts assessing quality of reasoning through a forced-choice design were both able to choose arguments supporting more accurate solutions—62.2% (SE = 1%) of the time for novices and 74.4% (SE = 1%) for experts—and arguments produced by larger teams—up to 82% of the time for novices and 85% for experts—with high inter-rater reliability, namely 70.58% (95% CI = 1.18) agreement for novices and 80.98% (95% CI = 2.26) for experts. We also explored two methods for increasing efficiency. We found that the number of comparative judgments needed could be substantially reduced with little accuracy loss by leveraging transitivity and producing quality-of-reasoning assessments using an AVL tree method. Moreover, a regression model trained to predict scores based on automatically derived linguistic features of participants’ judgments achieved a high correlation with the objective accuracy scores of the arguments in our dataset. Despite the inherent subjectivity involved in evaluating differing quality of reasoning, the forced-choice paradigm allows even novice raters to perform beyond chance and can provide a valid, reliable, and efficient method for producing quality-of-reasoning assessments at scale.
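
    The efficiency gain from transitivity can be sketched as follows (a simplified stand-in for the paper's AVL tree method): if pairwise judgments are treated as transitive, each new argument can be placed in an already-ordered list with roughly log2(n) forced-choice comparisons rather than being compared against every other argument. The prefer() oracle below is a toy substitute for a human rater.

    # Sketch only: binary insertion under a transitivity assumption.
    def insert_by_forced_choice(ranked, new_item, prefer):
        """prefer(a, b) -> True if a is judged better reasoned than b."""
        lo, hi = 0, len(ranked)
        while lo < hi:
            mid = (lo + hi) // 2
            if prefer(new_item, ranked[mid]):
                hi = mid              # new_item beats ranked[mid]: search upper half
            else:
                lo = mid + 1
        ranked.insert(lo, new_item)   # ranked stays ordered best-first

    def rank_arguments(arguments, prefer):
        ranked = []
        for arg in arguments:         # ~n*log2(n) comparisons instead of n*(n-1)/2
            insert_by_forced_choice(ranked, arg, prefer)
        return ranked

    # Toy "rater" that prefers the longer argument, for demonstration only.
    print(rank_arguments(["a", "bbbb", "cc", "ddd"], lambda a, b: len(a) > len(b)))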

    FASTCloud: A framework of assessment and selection for trustworthy cloud service based on QoS

    By virtue of its technological and economic advantages, cloud computing has attracted a large number of potential cloud consumers (PCCs) planning to migrate their traditional business to cloud services. However, trust remains one of the most challenging issues preventing PCCs from adopting cloud services, especially when selecting a trustworthy cloud service. Moreover, because quality of service (QoS) in the cloud environment is diverse and dynamic, existing trust assessment methods based on single constant QoS attribute values and subjective weight assignment do not adequately help PCCs identify and select a trustworthy cloud service among a wide range of functionally equivalent cloud service providers (CSPs). To address this challenge, a novel assessment and selection framework for trustworthy cloud services, FASTCloud, is proposed in this study. The framework helps PCCs select a trustworthy cloud service based on their actual QoS requirements. To assess the trust level of cloud services accurately and efficiently, a QoS-based trust assessment model is proposed. The model assesses trust levels from interval-valued multiple attributes and uses an objective weight assignment method based on deviation maximization to adaptively determine the trust level of the cloud services provisioned by candidate CSPs. A performance analysis and comparison demonstrate the time-complexity advantage of the proposed trust level assessment method. Experimental results from a case study with an open-source dataset show that the trust model assesses cloud service trust efficiently and that FASTCloud can effectively help PCCs select a trustworthy cloud service.
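
    A rough sketch of the deviation-maximization idea (with point-valued scores standing in for the paper's interval-valued attributes, and the trust-level aggregation omitted): attributes on which the candidate providers differ more receive larger objective weights. The QoS matrix below is invented for illustration.

    # Hedged sketch: objective weights via deviation maximization over a
    # normalized QoS matrix (rows = candidate CSPs, columns = QoS attributes).
    import numpy as np

    def deviation_max_weights(scores):
        n, m = scores.shape
        # Total pairwise absolute deviation of each attribute across providers.
        dev = np.array([np.abs(scores[:, j, None] - scores[None, :, j]).sum()
                        for j in range(m)])
        return dev / dev.sum()

    qos = np.array([[0.9, 0.7, 0.8],      # invented, already-normalized scores
                    [0.6, 0.9, 0.7],
                    [0.8, 0.8, 0.9]])
    w = deviation_max_weights(qos)
    print(w)                               # more discriminative attributes get larger weights
    print(qos @ w)                         # simple weighted score per provider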

    Estimating Defensive Cyber Operator Decision Confidence

    As technology continues to advance the domain of cyber defense, signature and heuristic detection mechanisms still require human operators to make judgments about the correctness of machine decisions. Human cyber defense operators rely on their experience, expertise, and understanding of network security when conducting cyber-based investigations to detect and respond to cyber alerts. Ever-growing quantities of cyber alerts and network traffic, coupled with systemic manpower issues, mean no one has the time to review or change decisions made by operators. Because these cyber alert decisions are not reviewed again, an inaccurate decision could cause grave damage to the network and host systems. The Cyber Intruder Alert Testbed (CIAT), a synthetic task environment (STE), was expanded to include investigative pattern-of-behavior monitoring and confidence-reporting capabilities. By analyzing the behavior and confidence of participants while they conducted cyber-based investigations, this research identified a mapping between investigative patterns of behavior and decision confidence. The total time spent on a decision, the time spent using different investigative tools, and the total number of tool transitions all influenced the confidence participants reported when conducting cyber-based investigations.
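
    One way to express such a mapping, purely as an illustration (the study's actual analysis, variable names, and data are not shown here), is a regression of reported confidence on the behavioural factors listed above.

    # Hedged sketch: relating behavioural measures to reported confidence.
    # The file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("ciat_decisions.csv")        # hypothetical per-decision log
    X = sm.add_constant(df[["total_decision_time_s",
                            "time_in_investigative_tools_s",
                            "tool_transitions"]])
    fit = sm.OLS(df["reported_confidence"], X).fit()
    print(fit.summary())                           # coefficient signs indicate direction of influence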

    Causal Inference under Data Restrictions

    This dissertation focuses on modern causal inference under uncertainty and data restrictions, with applications to neoadjuvant clinical trials, distributed data networks, and robust individualized decision making. In the first project, we propose a method under the principal stratification framework to identify and estimate the average treatment effects on a binary outcome, conditional on the counterfactual status of a post-treatment intermediate response. Under mild assumptions, the treatment effect of interest can be identified. We extend the approach to address censored outcome data. The proposed method is applied to a neoadjuvant clinical trial, and its performance is evaluated via simulation studies. In the second project, we propose a tree-based model averaging approach to improve the estimation accuracy of conditional average treatment effects at a target site by leveraging models derived from other, potentially heterogeneous sites without sharing subject-level data. The performance of this approach is demonstrated by a study of the causal effects of oxygen therapy on hospital survival rates and backed by comprehensive simulations. In the third project, we propose a robust individualized decision learning framework with sensitive variables, aimed at improving the worst-case outcomes for individuals that are driven by sensitive variables unavailable at the time of decision. Unlike most existing work, which uses mean-optimal objectives, we propose a robust learning framework that finds a newly defined quantile- or infimum-optimal decision rule. From a causal perspective, we also generalize the classic notion of (average) fairness to conditional fairness for individual subjects. The reliable performance of the proposed method is demonstrated through synthetic experiments and three real-data applications.
    Comment: PhD dissertation, University of Pittsburgh. The contents are mostly based on arXiv:2211.06569, arXiv:2103.06261, and arXiv:2103.04175, with extended discussion.
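
    A toy sketch of the robust-decision idea from the third project, under heavy simplification: given outcome estimates for each action at every value of a sensitive variable S that is unobserved at decision time, a mean-optimal rule averages over S, whereas the proposed quantile- or infimum-optimal rules optimize a low quantile or the worst case across S. The numbers below are invented.

    # Hedged sketch: mean-optimal vs. infimum/quantile-optimal action choice.
    import numpy as np

    # Rows = candidate actions, columns = values of the sensitive variable S.
    outcomes = np.array([[4.0, 0.5],     # action 0: great if S=0, poor if S=1
                         [2.0, 1.8]])    # action 1: moderate for both values of S

    mean_optimal     = outcomes.mean(axis=1).argmax()               # -> action 0
    infimum_optimal  = outcomes.min(axis=1).argmax()                # -> action 1
    quantile_optimal = np.quantile(outcomes, 0.25, axis=1).argmax() # -> action 1

    print(mean_optimal, infimum_optimal, quantile_optimal)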