
    Stateful Detection in High Throughput Distributed Systems

    With the increasing speed of computers and the growing complexity and scale of applications, many of today’s distributed systems exchange data at a high rate. It is important to provide error detection capabilities to such applications when they provide critical functionality. Significant prior work has been done on software-implemented error detection performed by a fault tolerance system separate from the application system. However, the high data rate coupled with complex detection can exhaust the capacity of the fault tolerance system, resulting in low detection accuracy. This is particularly the case when detection is done against rules based on state that has been generated in the system. We present a new stateful detection mechanism based on observing messages exchanged between the protocol participants, deducing the application state from them, and matching against anomaly-based rules. We have previously shown the capacity constraint of the detection framework, called the Monitor. Here we extend the Monitor framework with a sampling approach that adjusts the rate of messages to be verified by sampling the incoming stream of application messages, so that a breakdown in the Monitor’s capacity is avoided. The cost of processing each message increases because the application state is no longer accurately known at the Monitor; however, the overall detection cost is reduced due to the lower rate of messages processed. We show that even with sampling, the Monitor is able to track the possible states of the protocol entity and provide stateful detection. We implement the approach and apply it to a reliable multicast protocol called TRAM, and demonstrate the gains of the approach by comparing the latency and accuracy of fault detection to the baseline Monitor system.
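    The core idea of the abstract, sampling the message stream while tracking a set of possible protocol states for the messages that were skipped, can be illustrated with a minimal sketch. All names, the toy state machine, and the alarm logic below are illustrative assumptions, not the Monitor's or TRAM's actual design.

```python
import random

# Hypothetical protocol state machine: state -> {message_type: next_state}.
TRANSITIONS = {
    "idle":   {"DATA": "active", "ACK": "idle"},
    "active": {"DATA": "active", "ACK": "idle", "NACK": "repair"},
    "repair": {"DATA": "active", "ACK": "idle"},
}

def possible_next_states(states, observed=None):
    """Advance the set of possible states by one message.

    If the message was sampled, follow only that edge; if it was
    skipped, any outgoing edge may have been taken, so the set grows.
    """
    nxt = set()
    for s in states:
        if observed is not None:
            if observed in TRANSITIONS[s]:
                nxt.add(TRANSITIONS[s][observed])
        else:
            nxt.update(TRANSITIONS[s].values())
    return nxt or states

def monitor(stream, sample_rate, rng=random.Random(0)):
    """Sample messages at sample_rate; flag a fault when a sampled
    message is illegal in every currently possible state."""
    states, alarms = {"idle"}, []
    for i, msg in enumerate(stream):
        if rng.random() < sample_rate:      # verify this message
            legal = {s for s in states if msg in TRANSITIONS[s]}
            if not legal:
                alarms.append((i, msg, set(states)))
            else:
                states = possible_next_states(states, msg)
        else:                               # skipped: state uncertainty grows
            states = possible_next_states(states)
    return alarms
```

    Lowering `sample_rate` reduces per-stream verification work, but each verified message must be checked against a larger set of possible states, which is the cost trade-off the abstract describes.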

    Workplace-based assessment in clinical radiology in the UK - a validity study

    In 2010, the Royal College of Radiologists introduced workplace-based assessments to the postgraduate training pathway for clinical radiologists in the UK. Whilst the system served the purpose of contributing to high-stakes annual judgements about radiology trainees’ progression into subsequent years of training, it was primarily intended to be formative. This study was prompted by an interest in whether the new system fulfilled this formative role. Data collection and analysis spanned the first three years of the new system and followed a multi-methods approach. Descriptive statistical analysis was used to explore important parameters such as the timing and number of assessments undertaken by trainees and assessors. Using the literature and an iterative analysis of a large sample of trainee data, a coding framework for categories of feedback quality enabled assessors’ written comments to be explored using deductive and inductive qualitative analysis, with inferential statistical analysis of coded assessor feedback statements. For example, Ragin’s (1987, 2000, 2008) qualitative comparative analysis, QCA, was used to explore whether the assessments met necessary and/or sufficient conditions for high quality feedback. Pairs of assessor-trainee feedback comments were also analysed to establish whether any dialogic feedback interactions occurred. The study presents evidence that despite its intentions, the new system is generally failing to meet its primary, formative aim. As a consequence, the influence of negative washback on assessment practice was reflected in a number of findings. For example, there was evidence of trainees taking an instrumental approach to the assessments, undertaking only the prescribed minimum of assessments or completing assessments in the later stages of placements. Combined with evidence of retrospective assessment, i.e. after completion of the placements, the observed patterns of assessment over the three years are consistent with a box-ticking approach. This study explores the contextual policy and practice dimensions underpinning these and related findings and discusses the implications and recommendations for future arrangements.

    Soul as Paraphrase: The Formalism and Minority of Prayer

    Philosophical and theological treatments of Christian prayer regularly overlook its formal stakes. As a type of limit-speech, prayer can be thought alongside the class of logical dilemmas generated whenever an element of a total set refers to the very totality of which it is a part. These dilemmas are grouped together in what Graham Priest calls the “inclosure schema” and, moreover, exhibit a non-self-identical structure that is also the hallmark of robust metaphysical materialisms (i.e., the structure by which matter constitutively fails to coincide with itself). This dissertation sketches an immanent materialist account of Christian prayer by bringing these two things together: (1) the formal inclosure paradox in which prayer participates and (2) the non-self-identity that characterizes materialist ontologies. The dissertation begins with an Introduction that briefly sketches the gaps in the literature and the challenges facing a materialist account of prayer. Chapter 1, “God and Inclosure,” then introduces Graham Priest’s schematic for limit paradoxes and shows how Anselm’s and Pseudo-Dionysius’s accounts of prayer fit this schema. Chapter 2, “Form-of-Life in Prayer,” outlines a rather different approach to inclosure represented by Giorgio Agamben. Prayer is here treated as a devotional practice that scales life into an indivisible whole and inhabits the site of time’s failure to coincide with itself. In this way, prayer resists the biopolitical excesses risked by inclosure, answers certain Foucauldian critiques of Christian devotion, and challenges theories of prayer that understand it to be primarily a mental or dialogic practice. Chapter 3, “Prayer as Quantum Chamber,” puts prayer in conversation with François Laruelle’s particle collider—a prepared space in which the world takes on a minimal appearance and registers the effects of the real. On this reading, prayer is like a physicist’s construction of a state vector; it gathers up a disciple’s material occasions in order to present them to a kind of immanent vision. Finally, the project concludes with a brief fourth chapter that articulates a jointly Agambenian and Laruellian reading of the Lord’s Prayer.

    Automatic program analysis in a Prolog Intelligent Teaching System


    ProsocialLearn: D2.3 - 1st system requirements and architecture

    This document presents the first version of the ProsocialLearn architecture, covering the principles definition, the requirements collection, and the “business”, “information system”, and “technology” architectures as defined in the TOGAF methodology.

    A Prescriptive Learning Analytics Framework: Beyond Predictive Modelling and onto Explainable AI with Prescriptive Analytics and ChatGPT

    A significant body of recent research in the field of Learning Analytics has focused on leveraging machine learning approaches for predicting at-risk students in order to initiate timely interventions and thereby elevate retention and completion rates. The overarching focus of the majority of these research studies has been on the science of prediction only. The component of predictive analytics concerned with interpreting the internals of the models and explaining their predictions for individual cases to stakeholders has largely been neglected. Additionally, works that attempt to employ data-driven prescriptive analytics to automatically generate evidence-based remedial advice for at-risk learners are in their infancy. eXplainable AI is a recently emerged field providing cutting-edge tools that support transparent predictive analytics and techniques for generating tailored advice for at-risk students. This study proposes a novel framework that unifies transparent machine learning with techniques for enabling prescriptive analytics, while integrating the latest advances in large language models. This work practically demonstrates the proposed framework using predictive models for identifying learners at risk of programme non-completion. The study then further demonstrates how predictive modelling can be augmented with prescriptive analytics in two case studies in order to generate human-readable prescriptive feedback for those who are at risk using ChatGPT.
    Comment: revision of the original paper to include ChatGPT integration
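    The predict-explain-prescribe pipeline the abstract describes can be sketched minimally. The hand-set weights, feature names, and prompt wording below are illustrative assumptions, not the paper's actual model; a real system would fit the model on historical learner data and send the prompt to an LLM such as ChatGPT.

```python
import math

# Hypothetical, hand-set weights for an interpretable at-risk model.
WEIGHTS = {"logins_per_week": -0.4, "avg_quiz_score": -0.05, "missed_deadlines": 0.9}
BIAS = 1.0

def risk(features):
    """Logistic risk of non-completion from a transparent linear model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features):
    """Per-feature contribution to the logit: the explainable part."""
    return sorted(((k, WEIGHTS[k] * v) for k, v in features.items()),
                  key=lambda kv: -abs(kv[1]))

def prescriptive_prompt(features):
    """Turn prediction + explanation into a prompt asking an LLM to
    draft human-readable remedial advice (the prescriptive step)."""
    p = risk(features)
    drivers = ", ".join(f"{k} ({c:+.2f})" for k, c in explain(features)[:2])
    return (f"A learner has a {p:.0%} predicted risk of non-completion. "
            f"Main drivers: {drivers}. Suggest two concrete, supportive "
            f"actions the learner can take this week.")
```

    The design point is that the same per-feature contributions serve both purposes: they make the prediction transparent to stakeholders and they ground the generated advice in evidence rather than leaving the LLM to guess why the learner is at risk.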

    Measuring Semantic Textual Similarity and Automatic Answer Assessment in Dialogue Based Tutoring Systems

    This dissertation presents methods and resources proposed to improve the measurement of semantic textual similarity, and their applications to student response understanding in dialogue-based Intelligent Tutoring Systems. In order to predict the extent of similarity between a given pair of sentences, we have proposed machine learning models using dozens of features, such as scores calculated using optimal multi-level alignment, vector-based compositional semantics, and machine translation evaluation methods. Furthermore, we have proposed models that add an interpretation layer on top of similarity measurement systems. Our models for predicting and interpreting semantic similarity have been the top performing systems in SemEval (a premier venue for semantic evaluation) for the last three years. The correlations between our models' predictions and the human judgments were above 0.80 for several datasets, and our models were more robust than many other top performing systems. Moreover, we have proposed Bayesian models to adapt similarity models across domains. We have also proposed a novel neural network based word representation mapping approach which allows us to map the vector-based representation of a word found in one model to another model where the word representation is missing, effectively pooling together the vocabularies and corresponding representations across models. Our experiments show that model coverage increased by a few to several times depending on which model's vocabulary is taken as a reference. Also, the transformed representations were well correlated with the native target model vectors, showing that the mapped representations can be used with confidence to substitute the missing word representations in the target model. 
    Furthermore, we have proposed methods to improve open-ended answer assessment in dialogue-based tutoring systems, which is very challenging because of the variations in student answers, which often are not self-contained and need contextual information (e.g., dialogue history) in order to better assess their correctness. To that end, we have proposed Probabilistic Soft Logic (PSL) models augmenting semantic similarity information with other knowledge. To detect intra- and inter-sentential negation scope and focus in tutorial dialogues, we have developed Conditional Random Fields (CRF) models. The results indicate that our approach is very effective in detecting negation scope and focus in the tutorial dialogue context and can be further developed to augment natural language understanding systems. Additionally, we created resources (datasets, models, and tools) for fostering research in semantic similarity and student response understanding in conversational tutoring systems.
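    The alignment-based similarity feature the abstract mentions can be illustrated with a minimal sketch. This greedy one-to-one word alignment with a character-overlap word similarity is a stand-in assumption, not the dissertation's optimal multi-level alignment or its lexical-semantic resources.

```python
from difflib import SequenceMatcher

def word_sim(a, b):
    """Character-level similarity as a cheap stand-in for the word-level
    semantic similarity resources used in full STS systems."""
    return SequenceMatcher(None, a, b).ratio()

def align_score(s1, s2, threshold=0.8):
    """Greedily align each word of s1 to its most similar unused word
    of s2; the normalized count of aligned words approximates
    sentence-level semantic similarity."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    if not t1 or not t2:
        return 0.0
    unused, aligned = list(t2), 0
    for w in t1:
        best = max(unused, key=lambda u: word_sim(w, u), default=None)
        if best is not None and word_sim(w, best) >= threshold:
            aligned += 1
            unused.remove(best)
    return 2 * aligned / (len(t1) + len(t2))
```

    In a full system a score like this would be one of the dozens of features feeding a trained regression model, alongside compositional-semantics and machine-translation-evaluation features.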