6,959 research outputs found

    Algoritmilise mÔtlemise oskuste hindamise mudel (A model for assessing computational thinking skills)

    Get PDF
    The electronic version of this dissertation does not contain the publications. In the modernizing world, computer science is not only a separate discipline for scientists but plays an essential role in many fields. There is increasing interest in developing computational thinking (CT) skills at various education levels, from kindergarten to university. Research at the comprehensive school level is therefore needed to gain an understanding of the dimensions of CT skills and to develop a practical model for assessing them. CT is described in several articles, but these descriptions are not in line with each other, and a common understanding is missing of which skill dimensions should be in focus when developing and assessing CT. In this doctoral study, a systematic literature review gives an overview of the dimensions of CT presented in scientific papers. A model for assessing CT skills in three stages is proposed: i) defining the problem, ii) solving the problem, and iii) analyzing the solution. These three stages consist of ten CT skills: problem formulation, abstraction, problem reformulation, decomposition, data collection and analysis, algorithmic design, parallelization and iteration, automation, generalization, and evaluation. The systematic development of CT skills needs an instrument for assessing them at the basic school level. This doctoral study examines which CT skills can be distinguished from the results of the international Bebras (Kobras) challenge: two emerged, which can be characterized as algorithmic thinking and pattern recognition. The Bebras tasks were also modified for setting directions for developing CT skills at the secondary school level, confirming that in adapted form they can be used to assess CT skills at that level as well. Finally, a modified model for assessing CT skills is presented, combining the theoretical and empirical results from the three main studies.
    https://www.ester.ee/record=b543136
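    As a reading aid, the three stages and ten skills above can be written down as a small data structure. This is a minimal sketch: the grouping of skills under stages is an illustrative assumption (the abstract does not spell out the mapping), and the 0-1 scoring helper is not part of the dissertation.

```python
# Three-stage CT model from the abstract; the stage-to-skill grouping
# below is an assumed, illustrative mapping.
CT_MODEL = {
    "defining the problem": [
        "problem formulation",
        "abstraction",
        "problem reformulation",
        "decomposition",
    ],
    "solving the problem": [
        "data collection and analysis",
        "algorithmic design",
        "parallelization and iteration",
        "automation",
    ],
    "analyzing the solution": [
        "generalization",
        "evaluation",
    ],
}

def stage_scores(skill_scores: dict[str, float]) -> dict[str, float]:
    """Average per-skill scores (0-1) into one score per stage."""
    return {
        stage: sum(skill_scores.get(skill, 0.0) for skill in skills) / len(skills)
        for stage, skills in CT_MODEL.items()
    }

# Example: a learner strong in algorithmic design, weaker in evaluation.
print(stage_scores({"algorithmic design": 0.9, "evaluation": 0.3}))
```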

    Structured computer-based training in the interpretation of neuroradiological images

    Get PDF
    Computer-based systems may be able to address a recognised need throughout the medical profession for a more structured approach to training. We describe a combined training system for neuroradiology, the MR Tutor, which differs from previous approaches to computer-assisted training in radiology in that it provides case-based tuition whereby the system and user communicate in terms of a well-founded Image Description Language. The system implements a novel method of visualisation and interaction with a library of fully described cases, utilising statistical models of similarity, typicality and disease categorisation of cases. We describe the rationale, knowledge representation and design of the system, and provide a formative evaluation of its usability and effectiveness.
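    The "statistical models of similarity, typicality and disease categorisation" mentioned above suggest a simple sketch: encode each fully described case as a vector of Image Description Language descriptors, rank library cases by similarity to a new case, and score typicality as closeness to a category centroid. The binary descriptor encoding and the cosine/centroid metrics below are assumptions for illustration, not the MR Tutor's actual models.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two descriptor vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Rows: fully described library cases; columns: presence (1) or absence (0)
# of hypothetical Image Description Language descriptors.
library = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1]], dtype=float)

new_case = np.array([1, 1, 1, 0], dtype=float)

# Similarity: which described case is nearest to the case under study?
similarities = [cosine(new_case, c) for c in library]
print("most similar library case:", int(np.argmax(similarities)))

# Typicality: closeness to the centroid of a disease category
# (here, the whole library stands in for one category).
centroid = library.mean(axis=0)
print("typicality:", round(cosine(new_case, centroid), 3))
```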

    Reasoning before Comparison: LLM-Enhanced Semantic Similarity Metrics for Domain Specialized Text Analysis

    Full text link
    In this study, we leverage LLMs to enhance semantic analysis and develop similarity metrics for texts, addressing the limitations of traditional unsupervised NLP metrics such as ROUGE and BLEU. We develop a framework in which LLMs such as GPT-4 are employed for zero-shot text identification and label generation for radiology reports; the labels are then used as measurements of text similarity. By testing the proposed framework on the MIMIC data, we find that GPT-4-generated labels can significantly improve the semantic similarity assessment, with scores more closely aligned with clinical ground truth than traditional NLP metrics. Our work demonstrates the possibility of conducting semantic analysis of text data using semi-quantitative reasoning results from LLMs for highly specialized domains. While the framework is implemented for radiology report similarity analysis, its concept can be extended to other specialized domains as well.
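    A minimal sketch of the label-then-compare idea follows, assuming the OpenAI chat completions API for zero-shot labeling; the prompt wording and the Jaccard overlap of label sets are illustrative choices, not the paper's exact procedure.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm_labels(report: str) -> set[str]:
    """Zero-shot: ask the model for comma-separated finding labels."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "List the radiological findings in the report "
                        "as a comma-separated list of short labels."},
            {"role": "user", "content": report},
        ],
    )
    return {label.strip().lower()
            for label in resp.choices[0].message.content.split(",")}

def label_similarity(report_a: str, report_b: str) -> float:
    """Jaccard overlap of the generated label sets, in [0, 1]."""
    a, b = llm_labels(report_a), llm_labels(report_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```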

    Emergent Frameworks for Decision Support Systems

    Get PDF
    Knowledge is generated and accessed from heterogeneous spaces. Recent advances in information technologies provide enhanced tools for improving the efficiency of knowledge-based decision support systems. The purpose of this paper is to present frameworks for developing the optimal blend of technologies required to improve knowledge acquisition and reuse in large-scale decision-making environments. The authors present a case study in the field of clinical decision support systems based on emerging technologies. They consider the changes generated by the rise of social technologies and the challenges brought by interactive knowledge building within vast online communities.
    Keywords: Knowledge Acquisition, CDDSS, 2D Barcodes, Mobile Interface

    Nanoinformatics knowledge infrastructures: bringing efficient information management to nanomedical research

    Get PDF
    Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. the development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role of computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.

    Comparison of Required Competences and Task Material in Modeling Education

    Get PDF
    The reform of the European academic landscape with the introduction of bachelor's and master's degree programs has brought about several profound changes for teaching and assessment in higher education. With regard to the examination system, the shift towards output-oriented teaching is still one of the most significant challenges. Assessments have to be integrated into the teaching and learning arrangements and consistently aligned towards the intended learning outcomes. In particular, assessments should provide valid evidence that learners have acquired competences that are relevant for a specific domain. However, it seems that this didactic goal has not yet been fully achieved in modeling education in computer science. The aim of this study is to investigate whether typical task material used in exercises and exams in modeling education at selected German universities covers relevant competences required for graphical modeling. For this purpose, typical tasks in the field of modeling are first identified by means of a content-analytical procedure. Subsequently, it is determined which competence facets relevant for graphical modeling are addressed by the task types. By contrasting a competence model for modeling with the competences addressed by the tasks, a gap was identified between the required competences and the task material analyzed. In particular, the gap analysis shows the neglect of transversal competence facets as well as those related to the analysis and evaluation of models. The result of this paper is a classification of task types for modeling education and a specification of the competence facets addressed by these tasks. Recommendations for developing and assessing students' competences comprehensively are given.

    Cognitive Activity Support Tools: Design of the Visual Interface

    Get PDF
    This dissertation is broadly concerned with interactive computational tools that support the performance of complex cognitive activities, examples of which are analytical reasoning, decision making, problem solving, sense making, forecasting, and learning. Examples of tools that support such activities are visualization-based tools in the areas of education, information visualization, personal information management, statistics, and health informatics. Such tools enable access to information and data and, through interaction, enable a human-information discourse. In a more specific sense, this dissertation is concerned with the design of the visual interface of these tools. This dissertation presents a large and comprehensive theoretical framework to support research and design. Issues treated herein include interaction design and patterns of interaction for cognitive and epistemic support; analysis of the essential properties of interactive visual representations and their influences on cognitive and perceptual processes; an analysis of the structural components of interaction and how different operational forms of interaction components affect the performance of cognitive activities; an examination of how the information-processing load should be distributed between humans and tools during the performance of complex cognitive activities; and a categorization of common visualizations according to their structure and function, with a discussion of the cognitive utility of each category. This dissertation also includes a chapter that describes the design of a cognitive activity support tool, as guided by the theoretical contributions that comprise the rest of the dissertation. Those who may find this dissertation useful include researchers and practitioners in the areas of data and information visualization, visual analytics, medical and health informatics, data science, journalism, educational technology, and digital games.

    Closing the loop: assisting archival appraisal and information retrieval in one sweep

    Get PDF
    In this article, we examine the similarities between the concept of appraisal, a process that takes place within the archives, and the concept of relevance judgement, a process fundamental to the evaluation of information retrieval systems. More specifically, we revisit selection criteria proposed as a result of archival research and work within the digital curation communities, and compare them to relevance criteria as discussed in information retrieval's literature-based discovery. We illustrate how closely these criteria relate to each other and discuss how understanding the relationships between these disciplines could form a basis for proposing automated selection for archival processes and for initiating multi-objective learning with respect to information retrieval.

    Characterizing the Information Needs of Rural Healthcare Practitioners with Language Agnostic Automated Text Analysis

    Get PDF
    Objectives – Previous research has characterized urban healthcare providers' information needs, using various qualitative methods. However, little is known about the needs of rural primary care practitioners in Brazil. Communication exchanged during tele-consultations presents a unique data source for the study of these information needs. In this study, I characterize rural healthcare providers' information needs expressed electronically, using automated methods. Methods – I applied automated methods to categorize messages obtained from the telehealth system in two regions of Brazil. A subset of these messages, annotated with top-level categories in the DeCS terminology (the regional equivalent of MeSH), was used to train text categorization models, which were then applied to a larger, unannotated data set. On account of their more granular nature, I focused on the answers provided to the queries sent by rural healthcare providers, studying these answers as surrogates for the information needs they met. Message representations were generated using methods of distributional semantics, permitting the application of k-Nearest Neighbor classification for category assignment. The resulting category assignments were analyzed to determine differences across regions and healthcare providers. Results – Analysis of the assigned categories revealed differences in information needs across regions, corresponding to known differences in the distributions of diseases and tele-consultant expertise across these regions. Furthermore, the information needs of rural nurses were observed to differ from those documented in qualitative studies of their urban counterparts, and the distribution of expressed information-need categories differed across types of providers (e.g. nurses vs. physicians). Discussion – The automated analysis of large amounts of digitally-captured tele-consultation data suggests that rural healthcare providers' information needs in Brazil differ from those of their urban counterparts in developed countries. The observed disparities in information needs correspond to known differences in the distribution of illness and expertise in these regions, supporting the applicability of my methods in this context. In addition, these methods have the potential to mediate near real-time monitoring of information needs without imposing a direct burden upon healthcare providers. Potential applications include automated delivery of needed information at the point of care, needs-based deployment of tele-consultation resources, and syndromic surveillance. Conclusion – I used automated text categorization methods to assess the information needs expressed at the point of care in rural Brazil. My findings reveal differences in information needs across regions and across practitioner types, demonstrating the utility of these methods and data as a means to characterize information needs.
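    A runnable sketch of the pipeline described above: build distributional representations of messages, then assign top-level categories by k-Nearest Neighbor. TF-IDF followed by truncated SVD (LSA) stands in here for the study's distributional semantics method, and the messages and category labels are invented placeholders, not real DeCS annotations.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier

# Annotated subset: tele-consultation answers with top-level categories
# (invented placeholders standing in for DeCS annotations).
annotated_msgs = [
    "dosing of antihypertensive drugs in pregnancy",
    "wound dressing after minor surgery",
    "vaccination schedule for infants",
    "adjusting medication for type 2 diabetes",
]
labels = ["cardiovascular", "surgery", "pediatrics", "endocrine"]

model = make_pipeline(
    TfidfVectorizer(),             # language-agnostic token statistics
    TruncatedSVD(n_components=2),  # low-rank distributional space (LSA)
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(annotated_msgs, labels)

# Assign a category to an unannotated answer.
print(model.predict(["insulin titration for a diabetic patient"]))
```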

    Understanding Patient Safety Reports via Multi-label Text Classification and Semantic Representation

    Get PDF
    Medical errors are the result of problems in health care delivery. One of the key steps to eliminating errors and improving patient safety is patient safety event reporting. A patient safety report may record a number of critical factors involved in the health care delivered when incidents, near misses, and unsafe conditions occur. Clinicians and risk managers can therefore generate actionable knowledge by harnessing useful information from reports. To date, efforts have been made to establish a nationwide reporting and error analysis mechanism. The increasing volume of reports has been driving improvement in quantity measures of patient safety. For example, statistical distributions of errors across error types and health care settings have been well documented. Nevertheless, a shift to quality measures is highly demanded. In a health care system, errors are likely to occur if one or more components (e.g., procedures, equipment, etc.) that are intrinsically associated go wrong. However, our understanding of what these components are and how they are connected is limited, for at least two reasons. Firstly, patient safety reports present difficulties for aggregate analysis, since they are large in volume and complicated in semantic representation. Secondly, an efficient and clinically valuable mechanism to identify and categorize these components is absent. I strive to make my contribution by investigating the multi-labeled nature of patient safety reports. To facilitate clinical implementation, I propose that machine learning and semantic information from reports, e.g., semantic similarity between terms, can be used jointly to perform automated multi-label classification. My work is divided into three specific aims. In the first aim, I developed a patient safety ontology to enhance the semantic representation of patient safety reports. The ontology supports a number of applications, including automated text classification. In the second aim, I evaluated multi-label text classification algorithms on patient safety reports. The results identified a set of productive algorithms with balanced predictive power and efficiency. In the third aim, to improve classification performance, I developed a framework incorporating semantic similarity into kernel-based multi-label text classification. Semantic similarity values produced by different semantic representation models were evaluated in the classification tasks. Both ontology-based and distributional semantic similarity exerted a positive influence on classification performance, but the latter was significantly more efficient at computing semantic similarity. Our work provides insight into the nature of patient safety reports: a report can be labeled with the multiple components (e.g., procedures, settings, error types, and contributing factors) it contains. Multi-labeled reports hold promise for disclosing system vulnerabilities, since they provide insight into the intrinsically correlated components of health care systems. I demonstrated the effectiveness and efficiency of automated multi-label text classification embedded with semantic similarity information on patient safety reports. The proposed solution holds potential to be incorporated into existing reporting systems, significantly reducing the workload of aggregate report analysis.
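    A minimal sketch of the multi-label setup, assuming scikit-learn: TF-IDF features and a one-vs-rest linear SVM stand in for the dissertation's semantic-similarity kernels, and the reports and label sets are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

reports = [
    "wrong medication dose administered on the night shift",
    "patient fall near bed, no injury, bed rail missing",
    "infusion pump alarm ignored, delayed antibiotic dose",
]
label_sets = [
    {"medication error", "staffing"},
    {"fall", "equipment"},
    {"equipment", "medication error"},
]

mlb = MultiLabelBinarizer()        # one binary indicator column per label
Y = mlb.fit_transform(label_sets)

vec = TfidfVectorizer()
X = vec.fit_transform(reports)

# One linear classifier per label; a kernel classifier with a semantic
# similarity kernel could be swapped in at this point.
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)

pred = clf.predict(vec.transform(["pump failure led to a missed dose"]))
print(mlb.inverse_transform(pred))  # tuples of predicted labels
```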
    • 

    corecore