
    Converging organoids and extracellular matrix: New insights into liver cancer biology


    Improving diagnostic procedures for epilepsy through automated recording and analysis of patients’ history

    Transient loss of consciousness (TLOC) is a time-limited state of profound cognitive impairment characterised by amnesia, abnormal motor control, loss of responsiveness, a short duration, and complete recovery. Most instances of TLOC are caused by one of three health conditions: epilepsy, functional (dissociative) seizures (FDS), or syncope. There is often a delay before the correct diagnosis is made, and 10-20% of individuals initially receive an incorrect diagnosis. Clinical decision tools based on the endorsement of TLOC symptom lists have been limited to distinguishing between two causes of TLOC. The Initial Paroxysmal Event Profile (iPEP) has shown promise but was demonstrated to have greater accuracy in distinguishing between syncope and epilepsy or FDS than between epilepsy and FDS. The objective of this thesis was to investigate whether interactional, linguistic, and communicative differences in how people with epilepsy and people with FDS describe their experiences of TLOC can improve the predictive performance of the iPEP. An online web application was designed that collected information about TLOC symptoms and medical history from patients and witnesses using a binary questionnaire and verbal interaction with a virtual agent (VA). We explored potential methods of automatically detecting these communicative differences, whether the differences were present during an interaction with the VA, to what extent these automatically detectable communicative differences improve the performance of the iPEP, and the acceptability of the application from the perspective of patients and witnesses. The two feature sets that were applied to previous doctor-patient interactions, features designed to measure formulation effort or to detect semantic differences between the two groups, predicted the diagnosis with accuracies of 71% and 81%, respectively.
Individuals with epilepsy or FDS provided descriptions of TLOC to the VA that were qualitatively similar to those observed in previous research. Both feature sets were effective predictors of the diagnosis when applied to the web application recordings (85.7% and 85.7%). Overall, the accuracy of machine learning models trained for the three-way classification between epilepsy, FDS, and syncope using the iPEP responses from patients collected through the web application was worse than the performance observed in previous research (65.8% vs 78.3%), but performance was increased by the inclusion of features extracted from the spoken descriptions of TLOC (85.5%). Finally, most participants who provided feedback reported that the online application was acceptable. These findings suggest that it is feasible to differentiate between people with epilepsy and people with FDS using an automated analysis of spoken seizure descriptions. Furthermore, incorporating these features into a clinical decision tool for TLOC can improve predictive performance by improving the differential diagnosis between these two health conditions. Future research should use the feedback to improve the design of the application and increase the perceived acceptability of the approach.
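
The general approach described in the abstract, combining binary questionnaire responses with features extracted from spoken descriptions and training a classifier, can be sketched as follows. This is a minimal illustration with synthetic data: the feature names, dimensions, and model choice are assumptions for the sketch, not the actual iPEP items, feature sets, or models from the thesis.

```python
# Sketch: augment binary symptom-questionnaire responses with linguistic
# features from transcribed spoken descriptions, then cross-validate a
# classifier for the three-way epilepsy/FDS/syncope diagnosis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # hypothetical number of participants

# Binary symptom responses (e.g. "was the event witnessed: yes/no").
questionnaire = rng.integers(0, 2, size=(n, 10))

# Continuous linguistic features from the spoken descriptions, e.g.
# formulation-effort proxies such as pause or hesitation rates.
linguistic = rng.normal(size=(n, 4))

X = np.hstack([questionnaire, linguistic])
y = rng.integers(0, 3, size=n)  # 0=epilepsy, 1=FDS, 2=syncope (synthetic)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With real labels, comparing cross-validated accuracy with and without the linguistic columns is the kind of comparison behind the 65.8% vs 85.5% figures reported above.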

    Genomics of cold adaptations in the Antarctic notothenioid fish radiation

    Numerous novel adaptations characterise the radiation of notothenioids, the dominant fish group in the freezing seas of the Southern Ocean. To improve understanding of the evolution of this iconic fish group, here we generate and analyse new genome assemblies for 24 species covering all major subgroups of the radiation, including five long-read assemblies. We present a new estimate for the onset of the radiation at 10.7 million years ago, based on a time-calibrated phylogeny derived from genome-wide sequence data. We identify a two-fold variation in genome size, driven by expansion of multiple transposable element families, and use the long-read data to reconstruct two evolutionarily important, highly repetitive gene family loci. First, we present the most complete reconstruction to date of the antifreeze glycoprotein gene family, whose emergence enabled survival in sub-zero temperatures, showing the expansion of the antifreeze gene locus from the ancestral to the derived state. Second, we trace the loss of haemoglobin genes in icefishes, the only vertebrates lacking functional haemoglobins, through complete reconstruction of the two haemoglobin gene clusters across notothenioid families. Both the haemoglobin and antifreeze genomic loci are characterised by multiple transposon expansions that may have driven the evolutionary history of these genes.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combination of synthetic aperture radar and deep learning technology. It aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is valuable and meaningful, therefore, to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.
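
A convolutional network for SAR image interpretation of the kind this reprint surveys can be sketched minimally as below. The architecture, layer sizes, and class count are assumptions for illustration, not a model from any contribution in the reprint; the main SAR-specific point is the single-channel input, since SAR amplitude chips are not RGB photographs.

```python
# Minimal illustrative CNN for SAR image classification (e.g. target
# recognition on small image chips). PyTorch is used here purely as a
# common deep learning framework, not as a choice made by the reprint.
import torch
import torch.nn as nn

class TinySARNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Single input channel: SAR amplitude, unlike 3-channel RGB.
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> size-agnostic
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A batch of four fake 64x64 single-channel SAR chips.
logits = TinySARNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 10])
```

The global average pooling makes the sketch independent of chip size, a convenient property since SAR datasets ship chips at varying resolutions.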

    Tradition and Innovation in Construction Project Management

    This book is a reprint of the Special Issue 'Tradition and Innovation in Construction Project Management' that was published in the journal Buildings.

    Optimizing lineup construction: the impact of filler similarity, distinctive facial features, and individual differences in facial recognition on lineup performance

    The present thesis sought to investigate optimal lineup construction methods that enhance the ability to discriminate between innocent and guilty suspects. Firstly, I highlight the need for lineup construction methods that enhance eyewitness discriminability. Next, in chapter 2, I conducted a systematic literature review of suspect-filler similarity and found that there were no standardised procedures for constructing lineups in experiments. I also highlight the impact of methodological factors on discriminability. Using the feature matching model and diagnostic feature detection theory (Colloff et al., 2021; Wixted & Mickes, 2014), I argue that low similarity lineups allow the witness to focus on the perpetrator's unique features that are diagnostic of guilt, to make an accurate identification decision. However, I note that the low similarity lineup advantage holds only when lineup construction methods are fair (i.e., result in two memory strength distributions in witness memory: one for the perpetrator and one for the fillers and innocent suspect). In chapter 3, I conducted an experiment investigating lineup construction methods for distinctive suspects (e.g., with a facial tattoo). I compared a high similarity replication lineup, in which the distinctive suspect's facial tattoo is exactly replicated across lineup members; a low similarity replication lineup, in which the lineup members have a similar but non-identical distinctive facial tattoo; and a do-nothing lineup, in which only the suspect has a distinctive feature. As predicted by the feature matching model, I found that low similarity replication lineups yield higher discriminability compared to high similarity replication and do-nothing lineups. In chapter 4, I critically evaluate the Benton Facial Recognition Test and advise on the use of psychometric tools to allow for further exploration of lineup construction methods that enhance discriminability when individual differences are also considered.
Finally, in chapter 5, I disentangle mixed findings in the literature to date and recommend that future research thoroughly report lineup construction methods.
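
Discriminability of the kind this thesis measures is commonly quantified with signal-detection statistics. As a minimal sketch, d' can be computed from a hit rate (guilty suspect identified from a target-present lineup) and a false-alarm rate (innocent suspect identified from a target-absent lineup); the rates below are made-up illustration values, not data from the thesis.

```python
# d' = z(hit rate) - z(false-alarm rate), using the probit transform.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical comparison: a lineup condition with a higher hit rate at
# the same false-alarm rate yields higher discriminability.
print(round(d_prime(0.70, 0.20), 2))  # 1.37
print(round(d_prime(0.55, 0.20), 2))  # 0.97
```

In practice the lineup literature often prefers full ROC analysis over a single d' value, since ROC curves trace discriminability across response criteria, but the probit computation above is the underlying building block.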

    Transparent Forecasting Strategies in Database Management Systems

    Whereas traditional data warehouse systems assume that data is complete or has been carefully preprocessed, an increasing share of data is imprecise, incomplete, and inconsistent. This is especially true in the context of big data, where massive amounts of data arrive continuously in real time from a vast range of data sources. Nevertheless, modern data analysis involves sophisticated statistical algorithms that go well beyond traditional BI and, additionally, is increasingly performed by non-expert users. Both trends require transparent data mining techniques that efficiently handle missing data and present a complete view of the database to the user. Time series forecasting estimates future, not yet available, data of a time series and represents one way of dealing with missing data. Moreover, it enables queries that retrieve a view of the database at any point in time - past, present, and future. This article presents an overview of forecasting techniques in database management systems. After discussing possible application areas for time series forecasting, we give a short mathematical background of the main forecasting concepts. We then outline various general strategies for integrating time series forecasting inside a database and discuss some individual techniques from the database community. We conclude this article by introducing a novel forecasting-enabled database management architecture that natively and transparently integrates forecast models.
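
The core idea of transparent forecasting, answering a query about a future point from a model rather than from stored rows, can be sketched in a few lines. The smoothing method, data, and query interface below are assumptions for the sketch, not techniques from the article's survey.

```python
# Toy "transparent" lookup: past/present values come from stored data,
# future values from Holt's linear (double exponential) smoothing.

def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Forecast `horizon` steps beyond the end of `series`."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + horizon * trend

# Stored history: e.g. monthly sales up to "now" (index 4).
history = [10.0, 12.0, 14.0, 16.0, 18.0]

def query(t):
    """Return the value at time index t, forecasting past stored data."""
    if t < len(history):
        return history[t]  # answered from stored rows
    return holt_forecast(history, t - len(history) + 1)  # answered by model

print(query(2))  # 14.0 (stored)
print(query(6))  # 22.0 (forecast two steps ahead)
```

The caller cannot tell whether the answer came from a table or a model, which is the transparency property the article's forecasting-enabled architecture provides natively inside the DBMS.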

    The Politics of Platformization: Amsterdam Dialogues on Platform Theory

    What is platformization, and why is it a relevant category in the contemporary political landscape? How is it related to cybernetics and the history of computation? This book tries to answer such questions by engaging in multidisciplinary dialogues about the first ten years of the emerging fields of platform studies and platform theory. It deploys a narrative and playful approach that makes use of anecdotes, personal histories, etymologies, and speculations about possible futures to investigate both the fragmented genealogy that led to platformization and the organizational and economic trends that guide today's platform sociotechnical imaginaries.