The Art of collaborative storytelling: arts-based representations of narrative contexts
Draft for: ISA Research Committee on Biography and Society.
The author analyses several theories about science and the arts converging in a new point of view, and also discusses the functions of storytelling.
He opens his work with these quotations:
'Art and science have a common thread - both are fuelled by creativity. Whether writing a paper based on my data or filling a canvas with paint, both processes tell a story' (Taylor 2001)
'Science and art are complementary expressions of the same collective subconscious of society' (Morton 1997: 1)
Preserving the knowledge of long clinical texts using aggregated ensembles of large language models
Clinical texts, such as admission notes, discharge summaries, and progress
notes, contain rich and valuable information that can be used for various
clinical outcome prediction tasks. However, applying large language models,
such as BERT-based models, to clinical texts poses two major challenges: the
limitation of input length and the diversity of data sources. This paper
proposes a novel method to preserve the knowledge of long clinical texts using
aggregated ensembles of large language models. Unlike previous studies which
use model ensembling or text aggregation methods separately, we combine
ensemble learning with text aggregation and train multiple large language
models on two clinical outcome tasks: mortality prediction and length of stay
prediction. We show that our method can achieve better results than baselines,
ensembling, and aggregation individually, and can improve the performance of
large language models while handling long inputs and diverse datasets. We
conduct extensive experiments on the admission notes from the MIMIC-III
clinical database by combining multiple unstructured and high-dimensional
datasets, demonstrating our method's effectiveness and superiority over
existing approaches. We also provide a comprehensive analysis and discussion of
our results, highlighting our method's applications and limitations for future
research in the domain of clinical healthcare. The results and analysis of this
study is supportive of our method assisting in clinical healthcare systems by
enabling clinical decision-making with robust performance overcoming the
challenges of long text inputs and varied datasets.Comment: 17 pages, 4 figures, 4 tables, 9 equations and 1 algorith
Big Data for Qualitative Research
Big Data for Qualitative Research covers everything small data researchers need to know about big data, from the potentials of big data analytics to its methodological and ethical challenges. The data that we generate in everyday life is now digitally mediated, stored, and analyzed by web sites, companies, institutions, and governments. Big data is large volume, rapidly generated, digitally encoded information that is often related to other networked data, and can provide valuable evidence for study of phenomena. This book explores the potentials of qualitative methods and analysis for big data, including text mining, sentiment analysis, information and data visualization, netnography, follow-the-thing methods, mobile research methods, multimodal analysis, and rhythmanalysis. It debates new concerns about ethics, privacy, and dataveillance for big data qualitative researchers. This book is essential reading for those who do qualitative and mixed methods research, and are curious, excited, or even skeptical about big data and what it means for future research. Now is the time for researchers to understand, debate, and envisage the new possibilities and challenges of the rapidly developing and dynamic field of big data from the vantage point of the qualitative researcher.
The Role of Mediators in Transforming and Translating Information Quality: A Case of Quality Assurance in a Norwegian Hospital Trust
The existing literature on information quality (IQ) provides limited understanding of how roles influence IQ in healthcare. The traditional way of understanding roles such as collectors, custodians, and consumers assumes that data are simply transformed into information and subsequently used by consumers. However, this does not explain how interpersonal communication influences IQ. In reality, the actors involved can actively change the quality of healthcare information through transformation, translation, or distortion. Latour's idea of intermediaries and mediators can be an appropriate lens for understanding these roles. Latour defined intermediaries as socio-technical actors who simply transport information, whereas mediators can transform, translate, distort, and change the meaning of information. Following Latour's idea, we conducted a qualitative case study of quality assurance in a Norwegian healthcare organization. In doing so, we illustrated how IQ mediators can distort or create shared understanding of quality assurance information, which further influences enactment.
Machine Learning and Clinical Text. Supporting Health Information Flow
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text and inabilities to utilize it create risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging.
The aim in this doctoral dissertation is to study machine learning and clinical text in order to support health information flow.
First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development.
Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow.
Altogether, five machine learning applications for three practical cases are described. The first two applications are binary classification and regression for the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding; it is tested with English radiology reports. The performance of all these applications is promising.
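The fifth task — diagnosis coding — is multi-label: one report may receive several codes at once, which is commonly handled by an independent binary decision per code (one-vs-rest). A minimal sketch follows; the keyword rules are invented stand-ins for the dissertation's trained models, not its actual systems.

```python
# One-vs-rest multi-label assignment of diagnosis codes to a radiology
# report: each code has an independent binary rule, so a single report
# can trigger zero, one, or several codes.

CODE_KEYWORDS = {
    "486":   {"pneumonia", "infiltrate"},    # hypothetical rule
    "428.0": {"cardiomegaly", "effusion"},   # hypothetical rule
}

def assign_codes(report):
    """Return every code whose rule fires (multi-label output)."""
    words = set(report.lower().split())
    return sorted(code for code, keywords in CODE_KEYWORDS.items()
                  if words & keywords)

report = "Chest x-ray shows cardiomegaly and a right lower lobe infiltrate"
codes = assign_codes(report)
assert codes == ["428.0", "486"]
```

A learned system would replace each keyword set with a per-code binary classifier, but the output structure — a set of codes per report — is the same.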
Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to evaluation diversity and quality.
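For grounding the terminology, a plain hold-out evaluation splits the labeled data once into a training set and a held-out test set. The sketch below shows only this standard split, not the dissertation's new hold-out variant; the documents and labels are toy stand-ins.

```python
# Standard hold-out split: shuffle the labeled items with a fixed seed,
# then reserve a fraction of them as an untouched test set.
import random

def holdout_split(items, test_fraction=0.2, seed=0):
    """Return (train, test) lists covering all items exactly once."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

docs = [(f"doc {i}", i % 2) for i in range(10)]
train, test = holdout_split(docs)
assert len(train) == 8 and len(test) == 2
assert set(train) | set(test) == set(docs)
```

The fixed seed makes the split reproducible, which matters when comparing evaluation measures across applications.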
The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.