Measuring the impact of COVID-19 on hospital care pathways
Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can help hospital management measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions to hospital care pathways. We found that during the pandemic, both A&E and maternity pathways showed measurable reductions in mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
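The conformance measurement described above can be illustrated with a minimal sketch. This is not the authors' actual process-mining pipeline (which would use a tool such as pm4py and richer process models); the normative pathway and the event log below are invented, and conformance here means only that the expected activities occur in the normative order.

```python
# Minimal sketch: fraction of care-pathway traces that conform to a
# simple normative model (an ordered list of expected activities).
def conforms(trace, normative):
    """True if the activities in `normative` appear in `trace` in order."""
    it = iter(trace)
    return all(step in it for step in normative)

def conformance_rate(log, normative):
    """Share of traces in the event log that conform to the model."""
    return sum(conforms(t, normative) for t in log) / len(log)

# Hypothetical A&E event log: each trace is one patient's pathway.
normative = ["arrival", "triage", "assessment", "treatment", "discharge"]
log = [
    ["arrival", "triage", "assessment", "treatment", "discharge"],
    ["arrival", "triage", "treatment", "assessment", "discharge"],  # out of order
    ["arrival", "triage", "assessment", "x-ray", "treatment", "discharge"],
]

print(conformance_rate(log, normative))  # 2 of 3 traces conform
```

Tracking this rate month by month, as the study does, is what reveals a drop in conformance during a disruption.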
Machine Learning Algorithm for the Scansion of Old Saxon Poetry
Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deals with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labelled dataset to train the model. The evaluation of the algorithm's performance reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
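The weighted-average precision, recall, and F1 reported above are conventionally computed by averaging per-class metrics weighted by class support (as in scikit-learn's classification report). A minimal sketch of that computation; the metrical-pattern labels below are invented, not the Heliand annotations.

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Weighted-average precision, recall and F1, with per-class
    metrics weighted by class support (count in y_true)."""
    support = Counter(y_true)
    n = len(y_true)
    P = R = F = 0.0
    for cls, sup in support.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        predicted = sum(p == cls for p in y_pred)
        prec = tp / predicted if predicted else 0.0
        rec = tp / sup
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        w = sup / n
        P += w * prec
        R += w * rec
        F += w * f1
    return P, R, F

# Hypothetical metrical-pattern labels for six verses.
y_true = ["A", "A", "B", "C", "B", "A"]
y_pred = ["A", "A", "B", "B", "B", "A"]
print(weighted_prf(y_true, y_pred))
```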
Results of the Ontology Alignment Evaluation Initiative 2023
The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities. The OAEI 2023 campaign offered 15 tracks and was attended by 16 participants. This paper is an overall presentation of that campaign.
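In OAEI campaigns, a system's alignment is typically scored against a reference alignment using precision, recall, and F-measure over sets of correspondences. A minimal sketch of that evaluation; the correspondences between the two small ontologies below are invented.

```python
def evaluate_alignment(system, reference):
    """Precision, recall and F-measure of a system alignment against a
    reference alignment, both given as sets of (entity1, entity2) pairs."""
    tp = len(system & reference)  # correspondences found in both
    precision = tp / len(system) if system else 0.0
    recall = tp / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Invented correspondences between two toy ontologies o1 and o2.
reference = {("o1:Person", "o2:Human"), ("o1:writes", "o2:authorOf"),
             ("o1:Paper", "o2:Article")}
system = {("o1:Person", "o2:Human"), ("o1:Paper", "o2:Article"),
          ("o1:Venue", "o2:Event")}
print(evaluate_alignment(system, reference))
```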
Google search and the mediation of digital health information: a case study on unproven stem cell treatments
Google Search occupies a unique space within broader discussions of direct-to-consumer marketing of stem cell treatments in digital spaces. For patients, researchers, regulators, and the wider public, the search platform influences the who, what, where, and why of stem cell treatment information online. Ubiquitous and opaque, Google Search mediates which users are presented what types of content when these stakeholders engage in online searches around health information. The platform also sways the activities of content producers and the characteristics of the content they produce. For those seeking and studying information on digital health, this platform influence raises difficult questions around risk, authority, intervention, and oversight.
This thesis addresses a critical gap in the digital methodologies used in mapping and characterising that influence, as part of wider debates around algorithmic accountability within STS and digital health scholarship. By adopting a novel methodological approach to black-box auditing and data collection, I provide a unique evidentiary base for the analysis of ads, organic results, and the platform mechanisms of influence on queries related to stem cell treatments. I explore the question: how does Google Search mediate information that people access online about ‘proven’ and ‘unproven’ stem cell treatments?
Here I show that, in spite of a general ban on advertisements for stem cell treatments, users continue to be presented with content promoting unproven treatments. The types, frequency, and commercial intent of results related to stem cell treatments shifted across user groups, including by geography and, more troublingly, among those impacted by Parkinson’s Disease and Multiple Sclerosis.
Additionally, I find evidence that the technological structure of Google Search itself enables primary and secondary commercial activities around the mediation and dissemination of health information online. This suggests that Google Search’s algorithmically mediated rendering of search results, encompassing both commercial and non-commercial activities, has critical implications for the present and future of digital health studies.
Geo-L: Topological Link Discovery for Geospatial Linked Data Made Easy
Geospatial linked data are an emerging domain with growing interest in research and industry. There is an increasing number of publicly available geospatial linked data resources, which can also be interlinked and easily integrated with private and industrial linked data on the web. This paper introduces Geo-L, a system for the discovery of RDF spatial links based on topological relations. Experiments show that the proposed system improves on state-of-the-art spatial linking processes in terms of mapping time and accuracy, as well as resource-retrieval efficiency and robustness.
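Topological link discovery of the kind Geo-L performs can be sketched in miniature: for each pair of resources, test a spatial relation and emit an RDF-style link when it holds. This is not Geo-L's implementation; real systems test precise geometries (e.g. GeoSPARQL's `geo:sfIntersects` over polygons) with spatial indexing, whereas this invented example only checks axis-aligned bounding boxes.

```python
def bbox_intersects(a, b):
    """Boxes are (minx, miny, maxx, maxy) tuples."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def discover_links(source, target, predicate="geo:sfIntersects"):
    """Yield (subject, predicate, object) triples for every source/target
    pair whose bounding boxes intersect."""
    for s_uri, s_box in source.items():
        for t_uri, t_box in target.items():
            if bbox_intersects(s_box, t_box):
                yield (s_uri, predicate, t_uri)

# Hypothetical geospatial resources with bounding boxes.
source = {"ex:parkA": (0, 0, 2, 2), "ex:parkB": (5, 5, 6, 6)}
target = {"ex:cityX": (1, 1, 4, 4)}
print(list(discover_links(source, target)))
# [('ex:parkA', 'geo:sfIntersects', 'ex:cityX')]
```

A bounding-box pass like this is often used as a cheap filter before the exact topological test, which is where the mapping-time gains in such systems come from.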
Geographic information extraction from texts
A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although considerable progress has been achieved in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.
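The simplest form of geographic information extraction is gazetteer-based toponym matching: scan text for known place names and return their coordinates. Production systems use NER models and toponym disambiguation; this sketch, with an invented three-entry gazetteer, only shows the baseline idea.

```python
import re

# Invented toy gazetteer: place name -> (latitude, longitude).
GAZETTEER = {
    "London": (51.5074, -0.1278),
    "Paris": (48.8566, 2.3522),
    "New York": (40.7128, -74.0060),
}

def extract_toponyms(text):
    """Return (name, coordinates) for each gazetteer entry found in text,
    using word-boundary matching to avoid partial-word hits."""
    found = []
    for name, coords in GAZETTEER.items():
        if re.search(r"\b" + re.escape(name) + r"\b", text):
            found.append((name, coords))
    return found

text = "The flight from London to New York was delayed."
print(extract_toponyms(text))
```

Ambiguity (which "Paris"?) and implicit references ("the capital") are exactly the unsolved issues such a baseline cannot handle.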
SMAP: A Novel Heterogeneous Information Framework for Scenario-based Optimal Model Assignment
The increasing maturity of big data applications has led to a proliferation of models targeting the same objectives within the same scenarios and datasets. However, selecting the most suitable model, one that considers a model's features while taking specific requirements and constraints into account, still poses a significant challenge. Existing methods have focused on worker-task assignment based on crowdsourcing and neglect the scenario-dataset-model assignment problem. To address this challenge, a new problem named the Scenario-based Optimal Model Assignment (SOMA) problem is introduced, and a novel framework entitled Scenario and Model Associative percepts (SMAP) is developed. SMAP is a heterogeneous information framework that can integrate various types of information to intelligently select a suitable dataset and allocate the optimal model for a specific scenario. To evaluate models comprehensively, a new score function that utilizes multi-head attention mechanisms is proposed. Moreover, a novel memory mechanism named the mnemonic center is developed to store the matched heterogeneous information and prevent duplicate matching. Six popular traffic scenarios are selected as case studies, and extensive experiments are conducted on a dataset to verify the effectiveness and efficiency of SMAP and the score function.
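The multi-head attention underlying a score function like SMAP's is built from scaled dot-product attention heads. A minimal single-head sketch, not the paper's score function: the query stands in for scenario features and the keys/values for candidate-model features, and all vectors below are invented (a learned head would produce them via projection matrices).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_score(query, keys, values):
    """Single-head scaled dot-product attention: weight each value by the
    similarity of its key to the query. Multi-head attention runs several
    such heads in parallel over learned projections and concatenates them."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Invented scenario query and two candidate models' key/value vectors.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[0.9], [0.1]]
print(attention_score(query, keys, values))
```

Because the first key aligns with the query, the output is pulled toward the first value, which is how attention lets a scorer emphasise the best-matching candidate.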
A Creative Data Ontology for the Moving Image Industry
The moving image industry produces an extremely large amount of data and associated metadata for each media creation project, often in the range of terabytes. The current methods used to organise, track, and retrieve this metadata are inadequate, with metadata often being hard to find. The aim of this thesis is to explore whether there is a practical use case for using ontologies to manage metadata in the moving image industry, and to determine whether an ontology can be designed for such a purpose and used to manage metadata more efficiently and improve workflows. It presents a domain ontology, hereafter referred to as the Creative Data Ontology, engineered around a set of metadata fields provided by Evolutions, Double Negative (DNEG), and Pinewood Studios, and four use cases. The Creative Data Ontology is then evaluated using both quantitative methods and qualitative methods (via interviews) with domain and ontology experts. Our findings suggest that there is a practical use case for an ontology-based metadata management solution in the moving image industry. However, it would need to be presented carefully to non-technical users, such as domain experts, as they are likely to experience a steep learning curve. The Creative Data Ontology itself meets the criteria for a high-quality ontology for the sub-sectors of the moving image industry that it covers (i.e. scripted film and television, visual effects, and unscripted television), and it provides a good foundation for expanding into other sub-sectors of the industry, although it cannot yet be considered a "standard" ontology. Finally, the thesis presents the methodological process taken to develop the Creative Data Ontology and the lessons learned during the ontology engineering process, which can be valuable guidance for designers and developers of future metadata ontologies.
We believe such guidance could be transferable to many domains, unrelated to the moving image industry, where an ontology of metadata is required. Future research may focus on assisting non-technical users in overcoming the learning curve, which may also be applicable to other domains that choose to use ontologies in the future.
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
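Before a CSI time series reaches a sequence model such as an (A)BiLSTM, it is commonly segmented into fixed-size, overlapping windows, each of which becomes one classification sample. A minimal sketch of that preprocessing step; the 1-D amplitude stream and window parameters below are invented (real CSI has many subcarriers per packet), and this is not the paper's pipeline.

```python
def sliding_windows(series, size, step):
    """Split a time series into fixed-size windows advanced by `step`;
    overlapping windows (step < size) give the model more samples and
    smoother coverage of activity boundaries."""
    return [series[i:i + size]
            for i in range(0, len(series) - size + 1, step)]

# Hypothetical 1-D CSI amplitude stream of 10 samples.
csi = list(range(10))
windows = sliding_windows(csi, size=4, step=2)
print(windows)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Each window would then be labelled with the activity performed during it and fed to the classifier.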