13 research outputs found

    A systematic literature review of skyline query processing over data stream

    Recently, skyline query processing over data streams has gained considerable attention, especially from the database community, owing to its unique challenges. Skyline queries aim at pruning a potentially large multi-dimensional search space of objects by keeping only those objects that are not worse than any other. Although an abundance of skyline query processing techniques has been proposed, there is a lack of a Systematic Literature Review (SLR) of current research pertinent to skyline query processing over data streams. In this regard, this paper provides a comparative study of the state-of-the-art approaches over the period 2000 to 2022, with the main aim of helping readers understand the key issues that are essential to consider when processing skyline queries over streaming data. Seven digital databases were reviewed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) procedures. After applying both the inclusion and exclusion criteria, 23 primary papers were further examined. The results show that the identified skyline approaches are driven by the need to expedite skyline query processing, mainly because data streams are time-varying (time-sensitive), continuous, real-time, volatile, and unrepeatable. Although these skyline approaches are tailor-made for data streams with a common aim, their solutions vary to suit the various aspects being considered, which include the type of skyline query, the type of streaming data, the type of sliding window, the query processing technique, the indexing technique, and the data stream environment employed. In this paper, a comprehensive taxonomy is developed along with the key aspects of each reported approach, while several open issues and challenges related to the reviewed topic are highlighted as recommendations for future research directions.
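    The dominance-based pruning the abstract describes can be sketched in a few lines of Python. This is a minimal illustration only: the naive block-nested-loops skyline and the count-based sliding window below are generic assumptions for exposition, not any specific technique from the reviewed papers, and all dimensions are assumed to be minimized.

```python
from collections import deque

def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one (minimization assumed on all dimensions)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive block-nested-loops skyline: keep tuples dominated by no other."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

class SlidingWindowSkyline:
    """Recompute the skyline over the most recent `size` tuples of a stream.
    A count-based window is used here for simplicity; streaming approaches
    in the literature avoid full recomputation on every arrival."""
    def __init__(self, size):
        self.window = deque(maxlen=size)

    def insert(self, point):
        self.window.append(point)  # the oldest tuple expires automatically
        return skyline(list(self.window))

w = SlidingWindowSkyline(size=3)
w.insert((3, 4))
w.insert((1, 5))
print(w.insert((2, 2)))  # (3, 4) drops out: (2, 2) is better in both dimensions
```

    The deliberately simple quadratic recomputation is what the surveyed indexing and window-maintenance techniques aim to avoid; it serves only to make the dominance semantics concrete.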

    Am I safe? An interpretative phenomenological analysis of vulnerability as experienced by patients with complications following surgery

    Abdominal surgery carries with it risks of complications, yet little is known about patients' experiences of post-surgical deterioration. There is a real need to understand the psychosocial as well as the biological aspects of deterioration in order to improve care and outcomes for patients. Drawing on in-depth interviews with seven abdominal surgery survivors, we present an idiographic account of participants' experiences, situating their contribution to safety within their personal lived experiences and meaning-making of these episodes of deterioration. Our analysis reveals an overarching group experiential theme of vulnerability in relation to participants' experiences of complications after abdominal surgery. This encapsulates the uncertainty of the situation all the participants found themselves in, and the nature and seriousness of their health conditions. The extent of participants' vulnerability is revealed by detailing how they made sense of their experience, how they negotiated feelings of (un)safety drawing on their relationships with family and staff, and the legacy of feelings they were left with when their expectations of care (care as imagined) did not meet the reality of their experiences (care as received). The participants' experiences highlight the power imbalance between patients and professionals in terms of whose knowledge counts within the hospital context. The study reveals the potential for epistemic injustice to arise when patients' concerns are ignored or dismissed. Our data have implications for designing strategies to enable escalation of care, both in terms of supporting staff to deliver compassionate care and in strengthening patient and family involvement in rescue processes.

    Intraoperative Applications of Artificial Intelligence in Robotic Surgery: A Scoping Review of Current Development Stages and Levels of Autonomy

    OBJECTIVE: A scoping review of the literature was conducted to identify intraoperative AI applications for robotic surgery under development and categorise them by 1) purpose of the application, 2) level of autonomy, 3) stage of development, and 4) type of measured outcome. BACKGROUND: In robotic surgery, artificial intelligence (AI)-based applications have the potential to disrupt a field so far based on a master-slave paradigm. However, no overview is available of this technology’s current stages of development and levels of autonomy. METHODS: MEDLINE and EMBASE were searched between 1 January 2010 and 21 May 2022. Abstract screening, full-text review, and data extraction were performed independently by two reviewers. Level of autonomy was defined according to the Yang et al. classification, and stage of development according to the IDEAL framework. RESULTS: 129 studies were included in the review. 97 studies (75%) described applications providing Robot Assistance (autonomy level 1), 30 studies (23%) described applications enabling Task Autonomy (autonomy level 2), and two studies (2%) described applications achieving Conditional Autonomy (autonomy level 3). All studies were at IDEAL stage 0, and no clinical investigations on humans were found. 116 studies (90%) conducted in silico or ex vivo experiments on inorganic material, 9 (7%) conducted ex vivo experiments on organic material, and 4 (3%) performed in vivo experiments in porcine models. CONCLUSION: Clinical evaluation of intraoperative AI applications for robotic surgery is still in its infancy, and most applications have a low level of autonomy. With increasing levels of autonomy, the evaluation focus appears to shift from AI-specific metrics to process outcomes, although common standards are needed to allow comparison between systems.

    Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI

    A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system's actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multi-stakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyze expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 pre-defined stakeholder categories. The final composition and wording of the guideline was determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first round of Delphi, 138 in the second round, 16 in the consensus meeting and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made of 28 subitems) and ten generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we developed a guideline comprising key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and replicability of their findings.
