2,001 research outputs found

    Natural Language Processing in-and-for Design Research

    Full text link
    We review the scholarly contributions that utilise Natural Language Processing (NLP) methods to support the design process. Using a heuristic approach, we collected 223 articles published in 32 journals from 1991 to the present. We present the state of the art in NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. Upon summarising and identifying the gaps in these contributions, we utilise an existing design innovation framework to identify the applications that are currently supported by NLP. We then propose several methodological and theoretical directions for future NLP in-and-for design research.

    Project Triton : A study into delivering targeted information to an individual based on implicit and explicit data.

    No full text
    The World Wide Web is frequently seen as a source of knowledge; however, much of this remains undiscovered by its users. In recent times, recommender systems (e.g. Digg and Last.fm) have attempted to bridge this gap, alerting users to previously untapped knowledge. As more socially oriented services appear on the Web (e.g. Facebook and MySpace), it has never been easier to obtain information pertaining to an individual’s interests. At present, solutions for automated data recommendation tend to be highly topic-specific (recommending only a certain topic, such as news) and often only allow access to the system through monolithic interfaces. This report details the stages, from research to evaluation, involved in creating an extensible framework that operates without the need for human intervention. The framework will feature several proof-of-concept plugins residing in a custom workflow, which target information that is useful to the user. Information will be retrieved automatically through plugins involved with data gathering (such as feed processing and page scraping), while users’ interests will be obtained implicitly (for example, using header information to derive location) or explicitly (taking advantage of social network APIs such as Facebook Connect). Finally, third parties will be able to integrate the framework into their own solutions using the customisable XML API (written in PHP), so that their products can provide custom user interfaces without style constraints.
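    A rough sketch of the plugin-and-workflow idea described above follows, assuming a shared context dictionary that each plugin enriches in turn; all class and method names are hypothetical rather than taken from Project Triton, and Python stands in for the PHP of the actual framework.

```python
# Hypothetical sketch of a plugin workflow; names are illustrative only.
from abc import ABC, abstractmethod


class Plugin(ABC):
    """One stage of the workflow: data gathering or interest profiling."""

    @abstractmethod
    def run(self, context: dict) -> dict:
        """Enrich the shared context and pass it on."""


class FeedGatherer(Plugin):
    def run(self, context: dict) -> dict:
        # A real implementation would fetch and parse RSS/Atom feeds;
        # the result is stubbed here to keep the sketch self-contained.
        context.setdefault("items", []).append({"title": "Example article"})
        return context


class LocationProfiler(Plugin):
    def run(self, context: dict) -> dict:
        # Implicit interest: derive a coarse locale from request headers.
        headers = context.get("headers", {})
        context["locale"] = headers.get("Accept-Language", "unknown")
        return context


def run_workflow(plugins: list[Plugin], context: dict) -> dict:
    """Chain the plugins so the pipeline runs without human intervention."""
    for plugin in plugins:
        context = plugin.run(context)
    return context


print(run_workflow([FeedGatherer(), LocationProfiler()],
                   {"headers": {"Accept-Language": "en-GB"}}))
```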

    Impact of obstructive sleep apnoea and experiences of using positive airway pressure

    Get PDF
    The aim of this thesis was to explore the impact of the common sleep-related breathing disorder obstructive sleep apnoea (OSA), specifically for people with a bipolar disorder (BD) diagnosis, and the wider experience of the first-line treatment for OSA, positive airway pressure (PAP). Chapter 1 is a systematic literature review and thematic synthesis of experiences of using PAP to treat OSA. Twenty-five papers were reviewed and included in the thematic synthesis. The quality of each paper was appraised and considered in relation to its contribution to the resultant analytical themes. The metasynthesis gave voice to user experiences of PAP and revealed barriers to PAP use at the healthcare-service level. The findings highlight the need for a biopsychosocial approach and long-term person-centred support to enhance PAP use. Chapter 2 is a primary empirical research paper investigating whether people with suspected OSA and a BD diagnosis experience more sleep and affect instability when “inter-episode” than people with a BD diagnosis alone. Ecological momentary assessment was utilised: eighteen participants (twelve with suspected OSA) wore an actigraph for two weeks whilst completing an affect questionnaire twice daily. Measures of instability were calculated using the mean squared successive difference and probability of acute change indices. The groups were not found to differ significantly, other than reduced sleep efficiency in the suspected-OSA group; however, only 48% of the intended sample was successfully recruited due to the COVID-19 pandemic. Important avenues for further research are highlighted. Chapter 3 is a critical appraisal of the thesis. Salient issues relevant to future research and clinical practice are discussed, in addition to the under-recognised clinical issue of sleep which inspired this thesis.
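    For readers unfamiliar with the two instability indices named above, a minimal sketch over toy twice-daily affect ratings follows. The mean squared successive difference (MSSD) is standard; the “probability of acute change” is implemented here as the proportion of successive changes exceeding a threshold, which is one common operationalisation and may differ in detail from the thesis’s definition.

```python
# Toy instability indices over a short affect time series (hypothetical data).

def mssd(scores: list[float]) -> float:
    """Mean squared successive difference: higher means more unstable."""
    diffs = [(b - a) ** 2 for a, b in zip(scores, scores[1:])]
    return sum(diffs) / len(diffs)


def probability_of_acute_change(scores: list[float], threshold: float) -> float:
    """Proportion of successive changes whose magnitude exceeds `threshold`."""
    changes = [abs(b - a) for a, b in zip(scores, scores[1:])]
    return sum(c > threshold for c in changes) / len(changes)


ratings = [3.0, 5.5, 2.0, 6.0, 5.5, 2.5, 3.0, 6.5]  # two ratings/day, 4 days
print(mssd(ratings))
print(probability_of_acute_change(ratings, threshold=2.0))
```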

    Arkansas Tech Writing, 15th Edition

    Get PDF
    This is the fifteenth edition of a text that was first published in 1989 as Assignments and Models for English 2053. Carl Brucker is a Professor of English in the Department of English and World Languages at Arkansas Tech University, where he has taught technical writing and American literature since 1984. This text includes assignments, examples, and images supplied by Tech professors and staff members.

    Hospital health care executives' attitudes and beliefs on the impact that the Healthcare Providers and Systems survey has on service quality and hospital reimbursement

    Get PDF
    This study surveyed 314 hospital health care executives' attitudes and beliefs on the impact that the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey had on service quality levels and hospital reimbursements from the Centers for Medicare and Medicaid Services (HCAHPS, 2008). Additionally, this study reviewed the increase in service quality levels as measured by HCAHPS since its inception in 2006. Consumers now have access to data that was previously unavailable to them. If consumers see that a hospital has higher HCAHPS scores than a competing hospital in the area, the hospital with the higher scores should attract more patients. This study provides a research base that can be used as comparative data for other surveys conducted by those seeking to validate the effectiveness of the HCAHPS survey. A simple one-page, 10-question survey was developed by this researcher. HCAHPS Survey Average Aggregate Scores increased by one full percentage point for each of the targeted areas, indicating that over the past 4 years the perception of healthcare in the United States has improved slightly. The survey found that 82.2% agreed that service quality is the primary driver of their organization, 73.2% agreed that HCAHPS is the proper tool to measure service quality, 61.1% agreed that having HCAHPS data publicly shared is positive, and 56.7% agreed that HCAHPS should be used to justify CMS reimbursement. Six of the 15 demographic variables were significantly correlated with the aggregated scores. Specifically, higher aggregated scores were related to: (a) a higher hospital HCAHPS Overall Rating (r = .80); (b) being a CEO (r = .19); (c) not being a COO (r = -.16); and (d) the position of the hospital healthcare executive. Additionally, hospitals located in the West region (r = .22) and hospitals that identified themselves as rural (r = .18) showed significant correlations. Finally, the hospital's number of licensed beds (r = -.25) was also significantly correlated with the four research questions.
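    As a minimal illustration of the correlations reported above (for example, aggregated score versus holding the CEO position), the sketch below computes a Pearson r over hypothetical responses; it uses statistics.correlation, available from Python 3.10.

```python
# Hypothetical data: aggregated 10-item survey scores and a binary CEO flag.
from statistics import correlation  # Python 3.10+

aggregated_score = [34, 38, 31, 40, 36, 29, 39, 33]
is_ceo = [1, 1, 0, 1, 1, 0, 1, 0]  # 1 = respondent is a CEO

r = correlation(aggregated_score, is_ceo)
print(f"r = {r:.2f}")  # a positive r means CEOs tended to score higher
```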

    Adaptive hypertext and hypermedia : workshop : proceedings, 3rd, Sonthofen, Germany, July 14, 2001 and Aarhus, Denmark, August 15, 2001

    Get PDF
    This paper presents two empirical usability studies based on techniques from Human-Computer Interaction (HCI) and software engineering, which were used to elicit requirements for the design of a hypertext generation system. We discuss the findings of these studies, which motivated the choice of adaptivity techniques. The results showed dependencies between the different ways of adapting the explanation content and the document's length and formatting, so the system's architecture had to be modified to cope with this requirement. The system also had to be made adaptable, as well as adaptive, in order to satisfy the elicited user preferences.
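    The adaptive-versus-adaptable distinction drawn above can be made concrete with a small sketch: the system adapts content automatically from observed behaviour, while an explicit user preference, when present, overrides it. The rule and names below are illustrative, not taken from the actual system.

```python
# Illustrative only: adaptivity (system-driven) vs. adaptability (user-driven).

def choose_detail_level(pages_read: int, user_override: str | None) -> str:
    """Pick how detailed a generated explanation should be."""
    if user_override is not None:
        return user_override  # adaptable: the user's explicit choice wins
    # adaptive: experienced readers get brief explanations automatically
    return "brief" if pages_read > 10 else "full"


print(choose_detail_level(pages_read=12, user_override=None))    # brief
print(choose_detail_level(pages_read=12, user_override="full"))  # full
```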

    Cross-domain information extraction from scientific articles for research knowledge graphs

    Get PDF
    Today’s scholarly communication is a document-centred process and, as such, rather inefficient. Fundamental contents of research papers are not accessible to computers, since they are only present in unstructured PDF files. Therefore, current research infrastructures are not able to assist scientists appropriately in their core research tasks. This thesis addresses this issue and proposes methods to automatically extract relevant information from scientific articles for Research Knowledge Graphs (RKGs) that represent scholarly knowledge in a structured and interlinked form. First, this thesis conducts a requirements analysis for an Open Research Knowledge Graph (ORKG). We present literature-related use cases of researchers that should be supported by an ORKG-based system, along with their specific requirements for the underlying ontology and instance data. Based on this analysis, the identified use cases are categorised into two groups: the first group needs manual or semi-automatic approaches for knowledge graph (KG) construction, since it requires high correctness of the instance data; the second group requires high completeness and can tolerate noisy instance data, and thus needs automatic approaches for KG population. This thesis focuses on the second group of use cases and provides contributions for the machine learning tasks that aim to support them. To assess the relevance of a research paper, scientists usually skim through titles, abstracts, introductions, and conclusions. An organised presentation of an article's essential information would make this process more time-efficient. The task of sequential sentence classification addresses this issue by classifying the sentences of an article into categories like research problem, used methods, or obtained results. To address this problem, we propose a novel unified cross-domain multi-task deep learning approach that makes use of datasets from different scientific domains (e.g. biomedicine and computer graphics) with varying structures (e.g. datasets covering either only abstracts or full papers). Our approach significantly outperforms the state of the art on full-paper datasets while being competitive on datasets consisting of abstracts. Moreover, our approach enables the categorisation of sentences in a domain-independent manner. Furthermore, we present the novel task of domain-independent information extraction, which extracts scientific concepts from research papers regardless of domain. This task aims to support the use cases find related work and get recommended articles. For this purpose, we introduce a set of generic scientific concepts that are relevant across ten domains in Science, Technology, and Medicine (STM) and release an annotated dataset of 110 abstracts from these domains. Since the annotation of scientific text is costly, we suggest an active learning strategy based on a state-of-the-art deep learning approach. The proposed method enables us to nearly halve the amount of required training data. We then extend this domain-independent information extraction approach with the task of coreference resolution, which aims to identify mentions that refer to the same concept or entity. Baseline results on our corpus showed that current state-of-the-art approaches for coreference resolution perform poorly on scientific text. Therefore, we propose a sequential transfer learning approach that exploits annotated datasets from non-academic domains.
Our experimental results demonstrate that our approach noticeably outperforms the state-of-the-art baselines. Additionally, we investigate the impact of coreference resolution on KG population and demonstrate that coreference resolution has a small impact on the number of resulting concepts in the KG but significantly improves its quality. Consequently, using our domain-independent information extraction approach, we populate an RKG from 55,485 abstracts of the ten investigated STM domains. We show that every domain mainly uses its own terminology and that the populated RKG contains useful concepts. Moreover, we propose a novel approach for the task of citation recommendation, which can help researchers improve the quality of their work by finding relevant related work. Our approach exploits RKGs that interlink research papers based on the scientific concepts they mention. Using our automatically populated RKG, we demonstrate that combining information from RKGs with existing state-of-the-art approaches is beneficial. Finally, we conclude the thesis and sketch possible directions for future work.
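    As a rough sketch of the active learning strategy mentioned in the abstract, the loop below performs pool-based uncertainty sampling: train, query the example the model is least certain about, label it, repeat. The thesis pairs active learning with a state-of-the-art deep model; this sketch substitutes logistic regression on synthetic data to stay self-contained.

```python
# Pool-based active learning with uncertainty sampling (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
# Seed the labelled set with five examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)      # least confident prediction
    query = pool[int(np.argmax(uncertainty))]  # an annotator would label this
    labeled.append(query)
    pool.remove(query)

print(f"labelled {len(labeled)} of {len(X)} examples")
```

    Each round retrains on the growing labelled set and spends the annotation budget where the model is most uncertain, which is how the amount of required training data can be roughly halved.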

    Proceedings of the First International Workshop on Mashup Personal Learning Environments

    Get PDF
    Wild, F., Kalz, M., & Palmér, M. (Eds.) (2008). Proceedings of the First International Workshop on Mashup Personal Learning Environments (MUPPLE08), September 17, 2008, Maastricht, The Netherlands: CEUR Workshop Proceedings, ISSN 1613-0073. Available at http://ceur-ws.org/Vol-388. The work on this publication has been sponsored by the TENCompetence Integrated Project (funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning; contract 027087, http://www.tencompetence.org) and partly sponsored by the LTfLL project (funded by the European Commission's 7th Framework Programme, priority ICT; contract 212578, http://www.ltfll-project.org).