11,968 research outputs found

    Segment Anything Model (SAM) for Radiation Oncology

    In this study, we evaluate the performance of the Segment Anything Model (SAM) in clinical radiotherapy. We collected real clinical cases from four regions at the Mayo Clinic: prostate, lung, gastrointestinal, and head & neck, which are typical treatment sites in radiation oncology. For each case, we selected the OARs of concern in radiotherapy planning and compared the Dice and Jaccard scores of clinical manual delineation, automatic segmentation using SAM's "segment anything" mode, and automatic segmentation using SAM with a box prompt. Our results indicate that SAM performs better in automatic segmentation for the prostate and lung regions, while its performance in the gastrointestinal and head & neck regions is relatively inferior. When considering the size of the organ and the clarity of its boundary, SAM performs better for larger organs with clear boundaries, such as the lung and liver, and worse for smaller organs with unclear boundaries, like the parotid and cochlea. These findings align with the generally accepted variations in the difficulty of manual delineation of different organs at different sites in clinical radiotherapy. Given that SAM, a single trained model, could handle the delineation of OARs in all four regions, these results also demonstrate SAM's robust generalization in automatic segmentation for radiotherapy, i.e., delineating different radiotherapy OARs with a single generic segmentation model. SAM's generalization across regions makes it technically feasible to develop a generic model for automatic segmentation in radiotherapy.
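    The Dice and Jaccard overlap metrics used in the comparison above can be sketched as follows. This is a generic illustration on toy binary masks, not the study's actual evaluation code; the mask arrays are invented for the example.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Toy example: manual delineation vs. automatic segmentation of a 4x4 mask
manual = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
auto   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice(manual, auto), 3))     # → 0.857
print(round(jaccard(manual, auto), 3))  # → 0.75
```

    Note that Dice is always at least as large as Jaccard for the same pair of masks, which is why papers typically report both rather than treating them as interchangeable.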

    Rendezvous at Chesuncook: A Chronicle of Surveyors, Landowners, Loggers, Settlers, & Sports

    Abstract provided by the author: Rendezvous at Chesuncook, 1827-1902 is the only comprehensive history of Chesuncook Lake and Chesuncook settlement (village) from 1827 through 1902. The text’s two major focal points are people and old photographs. Over 350 biographical sketches include surveyors, landowners, lumbermen, drive bosses, loggers, settlers, and builders of dams and boats. The book is the only aggregate presentation of its more than 170 pictures. The photos communicate a history of what the landscape and settlements once looked like and how they changed over the decades. The book purposely ends December 30, 1902. Through 1902 the loggers were independent and orchestrated their cooperation through their organization, the Penobscot Log Driving Company, an entity that reformed every year with those who were logging. The book’s decade-by-decade chapter organization draws attention to their consistent, remarkable year-to-year efforts and successes. Within each chapter the content focuses on who surveyed the land, who bought property, who logged, who settled, and who worked with sports; what their activities were; and how and with what they functioned. Within this information are the wilderness farms that served the area: Lily Bay, Roach River, Ragged Lake, Ripogenus Lake, Deer Pond, and the head of Chesuncook Lake. The decade-by-decade organization reveals how ways and means of living and logging evolved. For example, loggers used horses, but prior to the 1890s the predominant work animal was an ox; why was that? The last chapter, “Remembering the drive bosses,” has pictures of 13 of these 22 men and, for every one of them, a verbal snapshot through which to remember them.

    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and the specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open-source knowledge graph framework based on knowledge graph building blocks (KGBBs). KGBBs are small knowledge-processing information modules, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from the data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, an editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graphs without having to think about semantic modelling. We conclude by discussing the nine challenges and how the KGBB Framework addresses the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the feasibility in principle of using semantic units and KGBBs to manage and structure knowledge graphs.
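    The decoupling the abstract describes can be sketched minimally: an in-memory model of semantic units held by a building block, with export and display handled by separate functions that depend only on a narrow interface. All class and function names here are illustrative assumptions, not the KGBB Framework's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticUnit:
    # Hypothetical minimal semantic unit: a single subject-predicate-object statement
    subject: str
    predicate: str
    obj: str

class KGBB:
    """A building block that handles one specific type of semantic unit."""
    def __init__(self, unit_type: str):
        self.unit_type = unit_type
        self._units: list[SemanticUnit] = []  # internal in-memory data model

    def add(self, unit: SemanticUnit) -> None:
        self._units.append(unit)

    def units(self) -> list[SemanticUnit]:
        return list(self._units)

def export_triples(kgbb: KGBB) -> list[tuple[str, str, str]]:
    # Access/export model: decoupled from how units are held in memory
    return [(u.subject, u.predicate, u.obj) for u in kgbb.units()]

def display(kgbb: KGBB) -> str:
    # Display model: also decoupled; renders units for a human reader
    return "\n".join(f"{u.subject} --{u.predicate}--> {u.obj}" for u in kgbb.units())

block = KGBB("parthood statement")
block.add(SemanticUnit("cell", "has_part", "nucleus"))
print(display(block))  # cell --has_part--> nucleus
```

    The point of the sketch is only that storage, display, and export never touch `_units` directly, so each can be swapped without changing the in-memory model.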

    Leveraging a machine learning based predictive framework to study brain-phenotype relationships

    An immense collective effort has been put towards the development of methods for quantifying brain activity and structure. In parallel, a similar effort has focused on collecting experimental data, resulting in ever-growing data banks of complex human in vivo neuroimaging data. Machine learning, a broad set of powerful and effective tools for identifying multivariate relationships in high-dimensional problem spaces, has proven to be a promising approach toward better understanding the relationships between the brain and different phenotypes of interest. However, applying machine learning within a predictive framework to neuroimaging data introduces several domain-specific problems and considerations, leaving ambiguous the overarching question of how best to structure and run experiments. In this work, I cover two explicit pieces of this larger question: the relationship between data representation and predictive performance, and a case study on issues related to data collected from disparate sites and cohorts. I then present the Brain Predictability toolbox, a software package that explicitly codifies, and makes more broadly accessible to researchers, the recommended steps in performing a predictive experiment, from framing a question to reporting results. This unique perspective ultimately offers recommendations, explicit analytical strategies, and example applications for using machine learning to study the brain.
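    The core of the predictive framework the abstract refers to is out-of-sample evaluation: fit a model on some subjects, score it on held-out subjects. A minimal sketch with cross-validated ridge regression on synthetic "brain feature" data is shown below; this is a generic illustration, not the Brain Predictability toolbox's API, and the data are simulated.

```python
import numpy as np

# Synthetic stand-in for neuroimaging data: subjects x brain features,
# with a phenotype that depends linearly on the features plus noise.
rng = np.random.default_rng(0)
n_subjects, n_features = 100, 20
X = rng.standard_normal((n_subjects, n_features))
w_true = rng.standard_normal(n_features)
y = X @ w_true + 0.1 * rng.standard_normal(n_subjects)

def ridge_fit(X, y, alpha=1.0):
    # Closed-form ridge regression: solve (X^T X + alpha I) w = X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def cross_val_r2(X, y, k=5, alpha=1.0):
    # k-fold cross-validation: fit on k-1 folds, score R^2 on the held-out fold
    idx = np.arange(len(y))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], alpha)
        pred = X[fold] @ w
        ss_res = np.sum((y[fold] - pred) ** 2)
        ss_tot = np.sum((y[fold] - y[fold].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return float(np.mean(scores))

print(f"mean out-of-sample R^2: {cross_val_r2(X, y):.2f}")
```

    The domain-specific complications the abstract raises (e.g., subjects drawn from disparate sites) typically enter exactly here, in how the folds are constructed: naive subject-level splits can leak site information between train and test sets.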

    Neural Architecture Search: Insights from 1000 Papers

    In the past decade, advances in deep learning have resulted in breakthroughs in a variety of areas, including computer vision, natural language understanding, speech recognition, and reinforcement learning. Specialized, high-performing neural architectures are crucial to the success of deep learning in these areas. Neural architecture search (NAS), the process of automating the design of neural architectures for a given task, is an inevitable next step in automating machine learning, and NAS-designed architectures have already outperformed the best human-designed architectures on many tasks. In the past few years, research in NAS has been progressing rapidly, with over 1000 papers released since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized and comprehensive guide to neural architecture search. We give a taxonomy of search spaces, algorithms, and speedup techniques, and we discuss resources such as benchmarks, best practices, other surveys, and open-source libraries.
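    The basic NAS loop the survey taxonomizes (a search space, a search algorithm, and an evaluation strategy) can be illustrated with the simplest baseline, random search. The search space and the scoring proxy below are invented for illustration; in practice the score would come from training and validating each candidate network.

```python
import random

random.seed(0)

# Toy search space: each architecture is one choice per dimension
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "gelu"],
}

def sample_architecture():
    # Search algorithm (random search): draw one value per dimension
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(arch):
    # Evaluation strategy stand-in: a cheap synthetic score in place of
    # expensive training + validation (purely illustrative numbers)
    return (arch["depth"] * 0.1
            + arch["width"] * 0.001
            + (0.05 if arch["activation"] == "gelu" else 0.0))

# Sample candidates and keep the best-scoring architecture
best = max((sample_architecture() for _ in range(50)), key=proxy_score)
print(best)
```

    Most of the speedup techniques the survey covers (weight sharing, performance predictors, zero-cost proxies) are ways of making the evaluation step far cheaper than full training, since that step dominates the cost of the loop.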

    Peer teachers Taking the Lead in Classroom Instruction: Program Creation and Challenges Faced

    Purpose – This paper discusses a program to train undergraduate students as near-peer teachers delivering course-embedded information literacy instruction to fellow undergraduates. Design/methodology/approach – The approach involved the development and delivery of a curriculum combining information literacy concepts and teaching pedagogy. Substantial student feedback was gathered, which shaped the final program structure. Findings – While the curriculum was successful in developing students’ information literacy competencies and pedagogical skills, limited stakeholder buy-in and the COVID-19 pandemic hindered the program. Additionally, the program’s goal of solo student teaching was not realized. Originality – Peer teaching is widely implemented in many disciplines; however, its application in academic libraries has focused more on peer reference than on peer teaching. This case study adds to the body of literature on student peer teaching in academic libraries.

    Specialized, Localized, Privatized: An Institutional and Historical Analysis of the Emergence of New Graduate Schools of Education

    Thesis advisor: Marilyn Cochran-Smith
    This dissertation presents an institutional and historical analysis of the emergence of new graduate schools of education, or nGSEs. A controversial reform in the field of teacher preparation, nGSEs offer teacher preparation, state certification, and master’s degrees in a variety of new non-university contexts. With bipartisan support and philanthropic backing, the nGSE phenomenon has gained traction quickly. Today, 11 nGSEs, some with several branches, are operating in 16 different states. The dissertation examines the emergence of nGSEs using concepts from sociological neoinstitutionalism, through primary document analysis and institutional analysis, to answer the following questions: (1) What is the nature of nGSEs as organizations, including their historical features, funding models, and organizational environments? What changes have occurred in these features since the inception of nGSEs? (2) What institutional logic animates nGSEs as organizations? (3) What happens to teacher preparation in market-organized environments? Analysis revealed that nGSEs have diverse organizational origins and that they have largely reconfigured time and place for teacher preparation. As organizations that have moved the bulk of teacher preparation to K-12 schools and/or the internet while evolving rapidly in different environments, nGSEs naturally have different cultural-cognitive schemata. However, market logic is evident in some form, though to varying degrees, at each new organization. nGSEs tend to be private sector solutions to problems in the public education system, and they enjoy the support of education philanthropists who fund alternatives to the public education bureaucracy. I show how nGSEs are fundamentally responses to specialized, and oftentimes regionalized, circumstances that create demand for new kinds of teacher preparation programs. nGSEs are tailored for particular contexts and conditions—some nGSEs serve certain geographical communities while others serve certain kinds of school communities or pedagogical movements. I argue that this has led to the creation of highly specialized niches in the 21st century market for teacher preparation. Though they all constitute one reform, namely the relocation of teacher preparation from universities to new and different kinds of organizations, nGSEs are remarkably different from one another and from the wider field of teacher preparation.
    Thesis (PhD) — Boston College, 2022. Submitted to: Boston College, Lynch School of Education. Discipline: Teacher Education, Special Education, Curriculum and Instruction

    Special Topics in Information Technology

    This open access book presents thirteen outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy. Information Technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems. The doctoral studies program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming more and more important in recent technological advances, in collaborative projects, and in the education of young researchers. Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics, starting from a broad background in various areas of Information Technology, especially Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. Each year, more than 50 PhDs graduate from the program. This book gathers the outcomes of the thirteen best theses defended in 2020-21 and selected for the IT PhD Award. Each of the authors provides a chapter summarizing his/her findings, including an introduction, description of methods, main achievements and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in Information Technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists.

    Command and Persuade

    Why, when we have been largely socialized into good behavior, are there more laws that govern our behavior than ever before? Levels of violent crime have been in a steady decline for centuries—for millennia, even. Over the past five hundred years, homicide rates have decreased a hundredfold. We live in a time that is more orderly and peaceful than ever before in human history. Why, then, does fear of crime dominate modern politics? In Command and Persuade, Peter Baldwin examines the evolution of the state's role in crime and punishment over three thousand years. Baldwin explains that the involvement of the state in law enforcement and crime prevention is relatively recent. In ancient Greece, those struck by lightning were assumed to have been punished by Zeus. In the Hebrew Bible, God was judge, jury, and prosecutor when Cain killed Abel. As the state's power as lawgiver grew, the sum total of prohibited behavior has grown continuously. At the same time, as family, community, and church exerted their influences, we have become better behaved and more law-abiding. Even as the state stands as the socializer of last resort, it also defines through law the terrain on which we are schooled into acceptable behavior. This title is also available in an Open Access edition.