
    The earliest evidence for Upper Paleolithic occupation in the Armenian Highlands at Aghitu-3 Cave

    With its well-preserved archaeological and environmental records, Aghitu-3 Cave permits us to examine the settlement patterns of the Upper Paleolithic (UP) people who inhabited the Armenian Highlands. We also test whether settlement of the region between ∼39–24,000 cal BP relates to environmental variability. The earliest evidence occurs in archaeological horizon (AH) VII from ∼39–36,000 cal BP during a mild, moist climatic phase. AH VI shows periodic occupation as warm, humid conditions prevailed from ∼36–32,000 cal BP. As the climate became cooler and drier at ∼32–29,000 cal BP (AH V-IV), evidence for occupation is minimal. However, as cooling continued, the deposits of AH III demonstrate that people used the site more intensively from ∼29–24,000 cal BP, leaving behind numerous stone artifacts, faunal remains, and complex combustion features. Despite the climatic fluctuations seen across this 15,000-year sequence, lithic technology remains attuned to one pattern: unidirectional reduction of small cores geared towards the production of bladelets for tool manufacture. Subsistence patterns also remain stable, focused on medium-sized prey such as ovids and caprids, as well as equids. AH III demonstrates an expansion of social networks to the northwest and southwest, as the transport distance of obsidian used to make stone artifacts increases. We also observe the addition of bone tools, including an eyed needle, and shell beads brought from the east, suggesting that these people manufactured complex clothing and wore ornaments. Remains of micromammals, birds, charcoal, pollen, and tephra tell the story of environmental variability. We hypothesize that UP behavior was linked to shifts in demographic pressures and climatic changes. Thus, by combining archaeological and environmental data, we gain a clearer picture of the first UP inhabitants of the Armenian Highlands.

    Multicentric validation of EndoDigest: a computer vision platform for video documentation of the critical view of safety in laparoscopic cholecystectomy

    Background: A computer vision (CV) platform named EndoDigest was recently developed to facilitate the use of surgical videos. Specifically, EndoDigest automatically provides short video clips to effectively document the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). The aim of the present study is to validate EndoDigest on a multicentric dataset of LC videos. Methods: LC videos from 4 centers were manually annotated with the time of the cystic duct division and an assessment of CVS criteria. Incomplete recordings, bailout procedures, and procedures with an intraoperative cholangiogram were excluded. EndoDigest leveraged predictions of deep learning models for workflow analysis in a rule-based inference system designed to estimate the time of the cystic duct division. Performance was assessed by computing the error in estimating the manually annotated time of the cystic duct division. To provide concise video documentation of CVS, EndoDigest extracted video clips showing the 2 min preceding and the 30 s following the predicted cystic duct division. The relevance of the documentation was evaluated by assessing CVS in the automatically extracted 2.5-min-long video clips. Results: 144 of the 174 LC videos from the 4 centers were analyzed. EndoDigest located the time of the cystic duct division with a mean error of 124.0 ± 270.6 s despite the use of fluorescent cholangiography in 27 procedures and great variations in surgical workflows across centers. The surgical evaluation found that 108 (75.0%) of the automatically extracted short video clips documented CVS effectively. Conclusions: EndoDigest was robust enough to reliably locate the time of the cystic duct division and to provide efficient video documentation of CVS despite the highly variable workflows. Training specifically on data from each center could improve results; however, this multicentric validation shows the potential for clinical translation of this surgical data science tool to efficiently document surgical safety.
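
The clip-extraction step described in the abstract (the 2 min preceding and 30 s following the predicted cystic duct division) can be sketched as a simple windowing rule. This is an illustrative reconstruction under stated assumptions, not EndoDigest's actual API; the function name, defaults, and clamping behavior are hypothetical:

```python
def cvs_clip_bounds(predicted_division_s: float,
                    video_duration_s: float,
                    before_s: float = 120.0,   # 2 min preceding the predicted division
                    after_s: float = 30.0      # 30 s following it
                    ) -> tuple[float, float]:
    """Return (start, end) in seconds for a CVS documentation clip.

    The window is clamped to the video bounds, so a division predicted
    near the start or end of the recording still yields a valid clip.
    """
    start = max(0.0, predicted_division_s - before_s)
    end = min(video_duration_s, predicted_division_s + after_s)
    return start, end
```

For a division predicted at 600 s in a 1-h video this yields the 2.5-min window (480 s, 630 s); a prediction at 60 s is clamped to (0 s, 90 s).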

    CholecTriplet2021: A benchmark challenge for surgical action triplet recognition

    Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ‹instrument, verb, target› triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and it also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
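
The mAP metric used to rank the challenge entries can be illustrated with a minimal sketch: average precision is computed per triplet class from score-ranked predictions, then averaged over classes that have at least one positive. This is a generic ranked-retrieval formulation for illustration, not the challenge's official evaluation code, and all names are hypothetical:

```python
def average_precision(scores, labels):
    """AP for one triplet class: mean precision at each true positive,
    with predictions ranked by descending score. Returns None if the
    class has no positives (such classes are excluded from the mean)."""
    ranked = [label for _, label in
              sorted(zip(scores, labels), key=lambda p: -p[0])]
    hits, precisions = 0, []
    for rank, label in enumerate(ranked, start=1):
        if label:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else None


def mean_average_precision(per_class):
    """per_class: list of (scores, labels) pairs, one per triplet class."""
    aps = [ap for scores, labels in per_class
           if (ap := average_precision(scores, labels)) is not None]
    return sum(aps) / len(aps) if aps else 0.0
```

For example, a class scored [0.9, 0.8, 0.1] with ground truth [1, 0, 1] gives AP = (1/1 + 2/3)/2 = 5/6; averaging such per-class APs over the triplet vocabulary yields the reported mAP.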

    Spectral theory of random self-adjoint operators
