13 research outputs found

    The Role of Informatics in Software Engineering: Literature Reviews, Agenda and Software Informatics

    Get PDF
    Literature reviews are increasingly being merged with semi-automated bibliometrics and text analytics, extending library capabilities toward performing informatics. Such extended capabilities allow special libraries to support, and even perform, informatic functions for fields such as bioinformatics or medical informatics. Recently a new subfield of informatics, software informatics, has been discussed that opens informatic opportunities for software engineering special libraries. The paper discusses a software engineering library that uses informatic techniques to characterize customer demand landscapes that inform software engineering agendas. The sources analyzed go beyond published periodical literature to include organizational reports such as budget justifications and, potentially, material gathered through web harvesting. It is proposed that this use of informatic techniques is part of software informatics and, because of its potential impact on how software is developed and used, may be part of software engineering as well.
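    To make the informatic techniques concrete, a minimal sketch of the kind of text analytics described above follows; the corpus directory, term list, and function name are hypothetical illustrations, not the paper's actual method.

        # Minimal sketch of demand-landscape text analytics: counting
        # demand-signal terms across a corpus of organizational reports.
        # The "reports/" directory and DEMAND_TERMS set are hypothetical.
        from collections import Counter
        from pathlib import Path
        import re

        DEMAND_TERMS = {"requirement", "budget", "upgrade", "migration", "security"}

        def term_frequencies(corpus_dir: str) -> Counter:
            """Count demand-signal terms across plain-text report files."""
            counts = Counter()
            for doc in Path(corpus_dir).glob("*.txt"):
                words = re.findall(r"[a-z]+", doc.read_text(encoding="utf-8").lower())
                counts.update(w for w in words if w in DEMAND_TERMS)
            return counts

        print(term_frequencies("reports/").most_common())

    A ranked term list like this is one simple way a library might summarize a demand landscape before layering bibliometrics on top.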

    A Bibliography of the Personal Software Process (PSP) and the Team Software Process (TSP)

    No full text
    Since the early 1990s, widespread use of the Personal Software Process (PSP) and Team Software Process (TSP) has resulted in a substantial body of literature about these methodologies and the experiences of organizations that have used them. This special report provides a bibliography of books, articles, and other literature concerning the PSP and TSP methodologies.

    Becoming a Science Librarian: Accident, Serendipity, or Purposeful Plan?

    No full text
    Increasing concern has been expressed in the literature regarding the recruitment and retention of qualified librarians within the profession. Science and Technology Libraries share equally in considering the consequences of this trend. Two Science Librarians, neither possessing a degree in the sciences, will discuss the skills, competencies, and experiences that enable them to thrive in a challenging and dynamic work environment. Descriptive statistics from a survey of other newly hired science librarians, regardless of their science background, will also be incorporated. In addition to exploring perceived strengths, this paper will address the possible disadvantages that the lack of a science “background” may present. Science background will be discussed in terms of having previously obtained a degree in the sciences. The approach to the topic is from the perspective of the new hire (not necessarily a “new” librarian), rather than that of the hiring institution; however, strategies and methods that are useful to both groups will be offered.

    Your Library Instruction is in Another Castle: Developing Information Literacy Based Video Games at Carnegie Mellon University

    No full text
    As part of an institution with a world-renowned computer science school and a reputation for developing innovative new technologies, the University Libraries at Carnegie Mellon were motivated to explore a new method of information literacy instruction: the creation of a web-based video game. Through a $50,000 grant from the Buhl Foundation, awarded in the spring of 2006, the University Libraries began developing a series of “web-based instructional modules” [1]. The University Libraries soon formed a representative group of three librarians, self-dubbed the Library Arcade (LA) Committee, to help define how best to transmute the goals of traditional “information literacy” instruction into a video game format. The committee began this process by investigating past and current trends in video game culture.

    Incidental Findings in the Trauma Population: Interdisciplinary Approach and Electronic Medical Record Reminder Association with Pre-Discharge Reporting and Medicolegal Risk

    No full text
    Background: Incidental findings (IFs) are reported in 20% or more of trauma CT scans. In addition to the importance of patient disclosure, there is considerable legal pressure to avoid missed diagnoses. We previously reported that 63.5% of IFs were disclosed before discharge and that 20% were never disclosed. We initiated a multidisciplinary systemic plan to effect predischarge disclosure through synoptic CT reports with American College of Radiology-recommended follow-up, electronic medical record discharge prompts, and provider education. Study Design: Patients in a prospective observational series from November 2019 to February 2020 were included. Statistical analysis was performed with SPSS, version 21 (IBM Corp). Results: Eight hundred and seventy-seven patients underwent 1 or more CT scans for the evaluation of trauma (507 male and 370 female). Mean age was 57 years (range 14 to 99 years) and 96% had blunt injury. In 315 patients there were 523 IFs (1.7 per patient); the most common involved the lung (17.5%), kidney (13%), and liver (11%). The radiology report compliance rate was 84% (210 of 249 patients). There were 66 studies from outside facilities. Sixteen IFs were suspicious for malignancy. A total of 151 patients needed no follow-up and 148 patients needed future follow-up evaluation. The predischarge IF disclosure compliance rate was 90.1% (286 patients); 25 disclosures occurred post discharge. Four patients remained undisclosed. Compared with our previous report, clearer reporting and electronic medical record prompts increased predischarge disclosure from 63.5% to 90.1% (p < 0.01, chi-square test) and decreased days to notification from 29.5 (range 0 to 277) to 5.2 (range 0 to 59) (p < 0.01, Mann-Whitney U test). Conclusions: Timely, complete disclosure of IFs improves patient outcomes and reduces medicolegal risk. Collaboration among trauma, radiology, and information technology promotes improved disclosure in trauma populations.
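    The before-and-after comparison above rests on a chi-square test of disclosure proportions and a Mann-Whitney U test on days to notification. A minimal sketch of both tests follows, using scipy; the abstract does not give the previous cohort's denominator, so the counts marked hypothetical are placeholders, not study data.

        # Sketch of the abstract's two statistical comparisons (scipy).
        # n_prev is NOT reported in the abstract; it and the counts
        # derived from it are hypothetical placeholders, not study data.
        from scipy.stats import chi2_contingency, mannwhitneyu

        n_prev = 300                            # hypothetical previous cohort size
        prev_disclosed = round(0.635 * n_prev)  # 63.5% predischarge disclosure
        n_curr = 315                            # patients with IFs (reported)
        curr_disclosed = 286                    # 90.1% predischarge (reported)

        table = [
            [prev_disclosed, n_prev - prev_disclosed],
            [curr_disclosed, n_curr - curr_disclosed],
        ]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi-square p = {p:.4f}")  # abstract reports p < 0.01

        # The days-to-notification comparison would apply mannwhitneyu to
        # the raw per-patient values, which the abstract does not provide:
        # u, p = mannwhitneyu(days_previous, days_current)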

    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    No full text
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit “breakthrough” behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
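    The gradual-versus-breakthrough distinction above is a property of a task's accuracy-versus-scale curve. The sketch below shows one hedged way to label such a curve; the scores and jump threshold are illustrative assumptions, not BIG-bench data or its API.

        # Illustrative sketch: label a task's scaling curve "gradual" or
        # "breakthrough" in the sense the abstract describes. The scores
        # and the jump threshold are made up, not BIG-bench results.
        def scaling_profile(scores: list[float], jump: float = 0.3) -> str:
            """Call a curve 'breakthrough' if one step between adjacent
            model sizes carries more than `jump` of the total gain."""
            total = scores[-1] - scores[0]
            if total <= 0:
                return "flat"
            steps = [b - a for a, b in zip(scores, scores[1:])]
            return "breakthrough" if max(steps) / total > jump else "gradual"

        # Hypothetical accuracies for five models of increasing size.
        print(scaling_profile([0.10, 0.12, 0.13, 0.55, 0.80]))  # breakthrough
        print(scaling_profile([0.10, 0.25, 0.40, 0.55, 0.70]))  # gradual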