
    Optical tomography: Image improvement using mixed projection of parallel and fan beam modes

    Mixed parallel and fan beam projection is a technique used to increase the quality of images. This research focuses on enhancing image quality in optical tomography. Image quality can be defined by measuring the Peak Signal to Noise Ratio (PSNR) and Normalized Mean Square Error (NMSE) parameters. The findings of this research prove that by combining parallel and fan beam projection, the image quality can be increased by more than 10% in terms of its PSNR value and more than 100% in terms of its NMSE value compared to a single parallel beam projection.
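    The two metrics are standard; below is a minimal NumPy sketch under their usual definitions (the abstract does not state its exact normalization, so the NMSE denominator here is an assumption):

```python
import numpy as np

def psnr(reference, reconstruction, max_value=255.0):
    """Peak Signal to Noise Ratio in dB; higher means better reconstruction."""
    mse = np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

def nmse(reference, reconstruction):
    """Normalized Mean Square Error; lower means better reconstruction.
    Normalized by the reference signal energy (assumed convention)."""
    ref = reference.astype(float)
    err = np.sum((ref - reconstruction.astype(float)) ** 2)
    return err / np.sum(ref ** 2)
```

    On 8-bit reconstructions, the reported improvement would show up as a higher PSNR and a lower NMSE for the mixed-projection image than for the parallel-beam-only image.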

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.

    Impact of public release of performance data on the behaviour of healthcare consumers and providers.

    BACKGROUND: It is becoming increasingly common to publish information about the quality and performance of healthcare organisations and individual professionals. However, we do not know how this information is used, or the extent to which such reporting leads to quality improvement by changing the behaviour of healthcare consumers, providers, and purchasers.
    OBJECTIVES: To estimate the effects of public release of performance data, from any source, on changing the healthcare utilisation behaviour of healthcare consumers, providers (professionals and organisations), and purchasers of care. In addition, we sought to estimate the effects on healthcare provider performance, patient outcomes, and staff morale.
    SEARCH METHODS: We searched CENTRAL, MEDLINE, Embase, and two trials registers on 26 June 2017. We checked reference lists of all included studies to identify additional studies.
    SELECTION CRITERIA: We searched for randomised or non-randomised trials, interrupted time series, and controlled before-after studies of the effects of publicly releasing data regarding any aspect of the performance of healthcare organisations or professionals. Each study had to report at least one main outcome related to selecting or changing care.
    DATA COLLECTION AND ANALYSIS: Two review authors independently screened studies for eligibility and extracted data. For each study, we extracted data about the target groups (healthcare consumers, healthcare providers, and healthcare purchasers), performance data, main outcomes (choice of healthcare provider, and improvement by means of changes in care), and other outcomes (awareness, attitude, knowledge of performance data, and costs). Given the substantial degree of clinical and methodological heterogeneity between the studies, we presented the findings for each policy in a structured format, but did not undertake a meta-analysis.
    MAIN RESULTS: We included 12 studies that analysed data from more than 7570 providers (e.g. professionals and organisations), and a further 3,333,386 clinical encounters (e.g. patient referrals, prescriptions). We included four cluster-randomised trials, one cluster-non-randomised trial, six interrupted time series studies, and one controlled before-after study. Eight studies were undertaken in the USA, and one each in Canada, Korea, China, and the Netherlands. Four studies examined the effect of public release of performance data on consumer healthcare choices, and four on improving quality. There was low-certainty evidence that public release of performance data may make little or no difference to long-term healthcare utilisation by healthcare consumers (3 studies; 18,294 insurance plan beneficiaries), or providers (4 studies; 3,000,000 births, and 67 healthcare providers), or to provider performance (1 study; 82 providers). However, there was also low-certainty evidence to suggest that public release of performance data may slightly improve some patient outcomes (5 studies; 315,092 hospitalisations, and 7502 providers). There was low-certainty evidence from a single study to suggest that public release of performance data may have differential effects on disadvantaged populations. There was no evidence about effects on healthcare utilisation decisions by purchasers, or adverse effects.
    AUTHORS' CONCLUSIONS: The existing evidence base is inadequate to directly inform policy and practice. Further studies should consider whether public release of performance data can improve patient outcomes, as well as healthcare processes.

    The Evolution of Wikipedia's Norm Network

    Social norms have traditionally been difficult to quantify. In any particular society, their sheer number and complex interdependencies often limit a system-level analysis. One exception is that of the network of norms that sustain the online Wikipedia community. We study the fifteen-year evolution of this network using the interconnected set of pages that establish, describe, and interpret the community's norms. Despite Wikipedia's reputation for ad hoc governance, we find that its normative evolution is highly conservative. The earliest users create norms that both dominate the network and persist over time. These core norms govern both content and interpersonal interactions using abstract principles such as neutrality, verifiability, and assume good faith. As the network grows, norm neighborhoods decouple topologically from each other, while increasing in semantic coherence. Taken together, these results suggest that the evolution of Wikipedia's norm network is akin to bureaucratic systems that predate the information age.
    Comment: 22 pages, 9 figures. Matches published version. Data available at http://bit.ly/wiki_nor
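    To make the finding that the earliest norms dominate concrete, one could take the norm-page link network together with page creation dates and measure how much of the connectivity accrues to the oldest pages. A toy networkx sketch (the page names, years, and 2001 cutoff are illustrative, not the paper's data):

```python
import networkx as nx

# Illustrative data: norm pages with creation years, and links between them.
created = {
    "Neutral point of view": 2001,
    "Verifiability": 2003,
    "No original research": 2003,
    "Assume good faith": 2004,
    "Civility": 2004,
}
links = [
    ("Verifiability", "Neutral point of view"),
    ("No original research", "Verifiability"),
    ("Assume good faith", "Neutral point of view"),
    ("Civility", "Assume good faith"),
]

G = nx.DiGraph(links)
nx.set_node_attributes(G, created, "year")

# Share of all in-links captured by the earliest-created pages.
core = {n for n in G if G.nodes[n]["year"] <= 2001}
total = sum(deg for _, deg in G.in_degree())
print(sum(G.in_degree(n) for n in core) / total)  # 0.5 on this toy network
```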

    Distributional Semantic Models for Clinical Text Applied to Health Record Summarization

    As information systems in the health sector are becoming increasingly computerized, large amounts of care-related information are being stored electronically. In hospitals, clinicians continuously document treatment and care given to patients in electronic health record (EHR) systems. Much of the information being documented is in the form of clinical notes, or narratives, containing primarily unstructured free-text information. For each care episode, clinical notes are written on a regular basis, ending with a discharge summary that summarizes the care episode. Although EHR systems are helpful for storing and managing such information, there is an unrealized potential in utilizing this information for smarter care assistance, as well as for secondary purposes such as research and education. Advances in clinical language processing are enabling computers to assist clinicians in their interaction with the free-text information documented in EHR systems. This includes assisting in tasks like query-based search, terminology development, knowledge extraction, translation, and summarization.
    This thesis explores various computerized approaches and methods aimed at enabling automated semantic textual similarity assessment and information extraction based on the free-text information in EHR systems. The focus is placed on the task of (semi-)automated summarization of the clinical notes written during individual care episodes. The overall theme of the presented work is to utilize resource-light approaches and methods, circumventing the need to manually develop knowledge resources or training data. Thus, to enable computational semantic textual similarity assessment, word distribution statistics are derived from large training corpora of clinical free text and stored as vector-based representations referred to as distributional semantic models. Resource-light methods are also explored for the task of performing automatic summarization of clinical free-text information, relying on semantic textual similarity assessment. Novel and experimental methods are presented and evaluated that focus on: a) distributional semantic models trained in an unsupervised manner from statistical information derived from large unannotated clinical free-text corpora; b) representing and computing semantic similarities between linguistic items of different granularity, primarily words, sentences, and clinical notes; and c) summarizing clinical free-text information from individual care episodes.
    Results are evaluated against gold standards that reflect human judgements. The results indicate that the use of distributional semantics is promising as a resource-light approach to automated capturing of semantic textual similarity relations from unannotated clinical text corpora. Here it is important that the semantics correlate with the clinical terminology and with the various semantic similarity assessment tasks. Improvements over classical approaches are achieved when the underlying vector-based representations allow a broader range of semantic features to be captured and represented. These are either distributed over multiple semantic models trained with different features and training corpora, or use models that store multiple sense-vectors per word. Further, the use of structured meta-level information accompanying care episodes is explored as training features for distributional semantic models, with the aim of capturing semantic relations suitable for care episode-level information retrieval. Results indicate that such models perform well in clinical information retrieval.
    It is shown that a method called Random Indexing can be modified to construct distributional semantic models that capture multiple sense-vectors for each word in the training corpus. This is done in a way that retains the original training properties of the Random Indexing method: being incremental, scalable, and distributional. Distributional semantic models trained with a framework called Word2vec, which relies on the use of neural networks, outperform those trained using the classic Random Indexing method in several semantic similarity assessment tasks, when training is done using comparable parameters and the same training corpora. Finally, several statistical features in clinical text are explored in terms of their ability to indicate sentence significance in a text summary generated from the clinical notes. This includes the use of distributional semantics to enable case-based similarity assessment, where cases are other care episodes and their ā€œsolutionsā€, i.e., discharge summaries. A type of manual evaluation is performed, where human experts rate the different aspects of the summaries using an evaluation scheme/tool. In addition, the original clinician-written discharge summaries are explored as a gold standard for the purpose of automated evaluation. Evaluation shows a high correlation between manual and automated evaluation, suggesting that such a gold standard can function as a proxy for human evaluations.
    This thesis has been published jointly with the Norwegian University of Science and Technology, Norway, and the University of Turku, Finland.
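    For the flavor of the approach, below is a minimal single-sense Random Indexing sketch: each word receives a fixed sparse ternary index vector, and a word's context vector accumulates the index vectors of its neighbours, which is what keeps training incremental and scalable. The corpus, window size, and dimensionality are illustrative; the thesis's multi-sense modification and its Word2vec comparison are not reproduced here.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
DIM, NONZERO = 512, 8  # vector dimensionality and ternary sparsity

def index_vector():
    """Fixed sparse ternary random vector: a few +/-1 entries, rest zeros."""
    v = np.zeros(DIM)
    slots = rng.choice(DIM, NONZERO, replace=False)
    v[slots] = rng.choice([-1.0, 1.0], NONZERO)
    return v

index = defaultdict(index_vector)              # random index vector per word
context = defaultdict(lambda: np.zeros(DIM))   # accumulated context vector per word

def train(sentences, window=2):
    """One incremental pass: add each neighbour's index vector to the word."""
    for sent in sentences:
        for i, word in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    context[word] += index[sent[j]]

def similarity(a, b):
    """Cosine similarity between two context vectors."""
    u, v = context[a], context[b]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

train([
    ["patient", "denies", "chest", "pain", "today"],
    ["patient", "reports", "chest", "discomfort", "today"],
])
print(similarity("pain", "discomfort"))  # positive: shared contexts (chest, today)
```

    Sentence- and note-level similarities, two of the granularity levels the thesis evaluates, can then be approximated by comparing averaged word vectors.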