Stemming the flow: improving retention for distance learning students
Though concern about student attrition and failure is not a new phenomenon, higher education institutions (HEIs) have struggled to significantly reduce the revolving-door syndrome. Open distance learning higher education is particularly susceptible to high student attrition. Despite a great deal of research into the student journey and the factors affecting likely success, we are not necessarily closer to understanding and mitigating student attrition. Learning analytics, as an emerging discipline and practice, promises to help penetrate the fog…
This case study describes work undertaken at the Open University in the UK to investigate how a learning analytics approach allows the University to provide timely and appropriate student support in a cost-effective manner. It includes a summary of the establishment of curriculum-based student support teams and a framework which defines more standardised student support informed by both student data and an enhanced knowledge of the curriculum. The primary aim of student support teams is to proactively support students through their study journey and to optimise their chances of reaching their declared study goals.
Higher education institutions (HEIs) are making increasing use of learning analytics to support the delivery of timely and relevant student support. The Open University in the UK, like other HEIs, knows a great deal about its students before they start to study and is able to track student behaviours once study has begun. Until recently, the university had not taken full advantage of the additional insight offered by such information. This paper describes the framework of support interventions established for all student support teams and the learning analytics approach used to support that framework.
Two-Phase Defect Detection Using Clustering and Classification Methods
Autonomous fault management of network and distributed systems is a challenging research problem that attracts considerable research activity. Solving it depends heavily on expert knowledge and on supporting tools for monitoring and detecting defects automatically. Recent research has focused on machine learning techniques that scrutinize system output data to mine abnormal events and detect defects. This paper proposes a two-phase defect detection approach for network and distributed systems using log message clustering and classification. The approach uses the K-means clustering method to obtain abnormal messages and the random forest method to relate the abnormal messages to existing defects. Several experiments evaluated the performance of this approach using log message data from the Hadoop Distributed File System (HDFS) and bug report data from a Bug Tracking System (BTS). The evaluation results yield several observations and lessons learned.
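The abstract names K-means for isolating abnormal messages and random forest for relating them to known defects. A minimal sketch of such a two-phase pipeline, using synthetic feature vectors in place of real HDFS log data (the data, distance threshold, and defect labels below are illustrative assumptions, not details from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "log message" feature vectors: normal messages cluster tightly,
# abnormal ones lie far from any centroid (illustrative data only).
normal = rng.normal(0.0, 1.0, size=(200, 5))
abnormal = rng.normal(6.0, 1.0, size=(20, 5))
X = np.vstack([normal, abnormal])

# Phase 1: cluster all messages, then flag points far from their assigned
# centroid as abnormal (the 90th-percentile cutoff is an assumption).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
candidates = X[dist > np.percentile(dist, 90)]

# Phase 2: a random forest relates abnormal messages to defect categories.
# The labels here are random stand-ins for bug-tracker report data.
y = rng.integers(0, 2, size=len(candidates))
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(candidates, y)
pred = rf.predict(candidates)
print(f"{len(candidates)} abnormal messages flagged")
```

In a real deployment, phase 1 would run on vectorized log templates and phase 2 would be trained on historical bug-report links rather than synthetic labels.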
Reports Of Conferences, Institutes, And Seminars
This quarter's column offers coverage of multiple sessions from the 2016 Electronic Resources & Libraries (ER&L) Conference, held April 3–6, 2016, in Austin, Texas. Topics in serials acquisitions dominate the column, including reports on altmetrics, cost per use, demand-driven acquisitions, scholarly communications, and the use of subscription agents; ERMS, access, and knowledgebases are also featured.
Mining Explainable Predictive Features for Water Quality Management
In water quality management processes, identifying and interpreting relationships between features, such as location and weather variable tuples, and water quality variables, such as levels of bacteria, is key to gaining insights and identifying areas where interventions should be made. There is a need for a search process to identify the locations and types of phenomena that are influencing water quality, and a need to explain how the quality is being affected and which factors are most relevant. This paper addresses both of these issues. A process is developed for collecting data for features that represent a variety of variables over a spatial region and that are used for training models and inference. An analysis of the performance of the features is undertaken using the models and Shapley values. Shapley values originated in cooperative game theory and can be used to aid the interpretation of machine learning results. Evaluations are performed using several machine learning algorithms and water quality data from the Dublin Grand Canal basin.
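Shapley values attribute a model's prediction to its input features by averaging each feature's marginal contribution over all coalitions of the other features. A minimal exact computation on a toy linear stand-in for a water-quality model (the feature names, weights, and baseline are illustrative assumptions; practical work would typically use an approximation library such as shap rather than this exponential enumeration):

```python
from itertools import combinations
from math import factorial

# Toy model standing in for a trained water-quality predictor: bacteria level
# as a weighted sum of three hypothetical features. Weights are made up.
weights = {"rainfall": 2.0, "temperature": 0.5, "discharge": 1.5}
baseline = {"rainfall": 0.0, "temperature": 0.0, "discharge": 0.0}
x = {"rainfall": 3.0, "temperature": 4.0, "discharge": 1.0}

def model(inputs):
    return sum(weights[f] * inputs[f] for f in weights)

def shapley(feature):
    """Exact Shapley value: weighted average of the feature's marginal
    contribution over all coalitions, absent features held at baseline."""
    others = [f for f in weights if f != feature]
    n, total = len(weights), 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            with_f = {f: x[f] if f in coalition or f == feature else baseline[f]
                      for f in weights}
            without_f = {f: x[f] if f in coalition else baseline[f]
                         for f in weights}
            coef = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += coef * (model(with_f) - model(without_f))
    return total

phi = {f: shapley(f) for f in weights}
print(phi)  # the attributions sum to model(x) - model(baseline)
```

For a linear model each feature's Shapley value reduces to its weight times its deviation from baseline, which makes the efficiency property (attributions summing to the prediction gap) easy to check by hand.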
Review and Analysis of Failure Detection and Prevention Techniques in IT Infrastructure Monitoring
Maintaining the health of IT infrastructure components for improved reliability and availability has been a research and innovation topic for many years. Identifying and handling failures is crucial and challenging due to the complexity of IT infrastructure. System logs are the primary source of information for diagnosing and fixing failures.
In this work, we address three essential research dimensions of failures: the need for failure handling in IT infrastructure, the contribution of system-generated logs to failure detection, and the reactive and proactive approaches used to deal with failure situations.
This study performs a comprehensive analysis of the existing literature along three prominent aspects: log preprocessing, anomaly and failure detection, and failure prevention.
Through this coherent review, we (1) establish the need for IT infrastructure monitoring to avoid downtime, (2) examine the three types of approaches to anomaly and failure detection, namely rule-based, correlation-based, and classification-based methods, and (3) formulate recommendations and guidelines for further research.
To the best of the authors' knowledge, this is the first comprehensive literature review of IT infrastructure monitoring techniques. The review has been conducted through meta-analysis and a comparative study of machine learning and deep learning techniques. This work aims to outline significant research gaps in the area of IT infrastructure failure detection, helping future researchers understand the advantages and limitations of current methods and select an approach adequate to their problem.
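Of the three approach families the review examines, the rule-based one is the simplest to illustrate: match log lines against hand-written patterns and map each match to a failure category. A minimal sketch with synthetic log lines and made-up rules (none of these patterns or categories come from the review itself):

```python
import re
from collections import Counter

# Synthetic log lines (illustrative only; not from any real system).
logs = [
    "2024-01-01 10:00:01 INFO  dfs.DataNode: heartbeat ok",
    "2024-01-01 10:00:02 WARN  dfs.DataNode: slow block transfer",
    "2024-01-01 10:00:03 ERROR dfs.NameNode: replica missing blk_42",
    "2024-01-01 10:00:04 ERROR dfs.NameNode: replica missing blk_43",
    "2024-01-01 10:00:05 FATAL dfs.NameNode: shutting down",
]

# Rule set: a pattern and the failure category it signals. Ordered from
# most to least specific because the first matching rule wins.
rules = [
    (re.compile(r"\bFATAL\b"), "critical-failure"),
    (re.compile(r"replica missing"), "replication-failure"),
    (re.compile(r"\bERROR\b"), "generic-error"),
]

def classify(line):
    for pattern, category in rules:
        if pattern.search(line):
            return category
    return None

hits = Counter(c for line in logs if (c := classify(line)))
print(hits)
```

Correlation-based and classification-based detectors replace the hand-written rule table with learned structure, which is where the machine learning and deep learning techniques compared in the review come in.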
Investigating the impact of health analytics on the cost and quality of care for patients with heart failure
The healthcare industry is under tremendous pressure to improve the quality of care and provide more patient-centric care while reducing costs. The potential use of data analytics to address these health system issues has raised significant interest in both research and practice. Health analytics is central to informing and realizing the systematic quality improvements and cost reductions required by healthcare reform. Fundamentally, the contribution of IS and analytics research in healthcare is to identify and study the impact of interventions that can make a significant difference to the quality and cost of care. This dissertation concentrates on patients with heart failure (HF). HF is the number one killer in the world and is the largest contributor to healthcare costs in the United States. Moreover, HF is one of the six conditions used by the Centers for Medicare and Medicaid Services (CMS) to exercise fiduciary control over health systems by monitoring both the quality and cost of care. Specifically, my larger research question is "How can we identify and inform impactful transition of care interventions that manage costs and improve resource allocation efficiencies while providing improved quality of care for heart failure patients?" We adopted a mixed-method approach to study the impact of transitional care in a healthcare system for patients with heart failure. This dissertation includes three essays. In the first essay, I use qualitative methods to study the nature, sources, and impacts of information coordination problems as HF patients transition through the patient flow in a health system. I propose a set of interventions based on my analysis of information and control errors along the continuum of care to inform the design of appropriate interventions that improve the cost and quality of care.
In the second essay, I empirically evaluate the impact of these interventions on cost and quality of care measures such as all-cause readmissions, heart failure readmissions, ER visits, length of stay, and cost of care. The analysis suggests that multicomponent complex transitional interventions have a significant impact on reducing 30-day readmissions and ER visits. The third essay is dedicated to understanding the impact of heart failure patients' self-care behaviors. I developed and validated an assessment tool for patients with heart failure to monitor and score their condition accurately. Together, these essays investigate impactful transition of care interventions that can help healthcare organizations improve quality of care and manage costs from the clinical, administrative, and patient perspectives.