8 research outputs found

    Thinking outside the graph: scholarly knowledge graph construction leveraging natural language processing

    Despite improved digital access to scholarly knowledge in recent decades, scholarly communication remains exclusively document-based. The document-oriented workflows in science publication have reached the limits of adequacy, as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiencies of peer review, and the reproducibility crisis. In this form, scientific knowledge remains locked in representations that are inadequate for machine processing. As long as scholarly communication remains in this form, we cannot take advantage of the advancements taking place in machine learning and natural language processing. Such techniques would facilitate the transformation from purely text-based representations into (semi-)structured semantic descriptions that are interlinked in a collection of big federated graphs. We are in dire need of a new age of semantically enabled infrastructure adept at storing, manipulating, and querying scholarly knowledge. Equally important is a suite of machine-assistance tools designed to populate, curate, and explore the resulting scholarly knowledge graph. In this thesis, we address the issue of constructing a scholarly knowledge graph using natural language processing techniques. First, we tackle the issue of developing a scholarly knowledge graph for structured scholarly communication that can be populated and constructed automatically. We co-design and co-implement the Open Research Knowledge Graph (ORKG), an infrastructure capable of modeling, storing, and automatically curating scholarly communications. Then, we propose a method to automatically extract information into knowledge graphs. With Plumber, we create a framework to dynamically compose open information extraction pipelines based on the input text. Such pipelines are composed of community-created information extraction components in an effort to consolidate individual research contributions under one umbrella. We further present MORTY, a more targeted approach that leverages automatic text summarization to create structured summaries, containing all required information, from the text of scholarly articles. In contrast to the pipeline approach, MORTY extracts only the information it is instructed to, making it a more valuable tool for various curation and contribution use cases. Moreover, we study the problem of knowledge graph completion: exBERT performs knowledge graph completion tasks, such as relation and entity prediction, on scholarly knowledge graphs by means of textual triple classification. Lastly, we use the structured descriptions collected from manual and automated sources alike in a question answering approach that builds on the machine-actionable descriptions in the ORKG. We propose JarvisQA, a question answering interface operating on tabular views of scholarly knowledge graphs, i.e., ORKG comparisons. JarvisQA is able to answer a variety of natural language questions and retrieve complex answers from pre-selected sub-graphs. These contributions are key to the broader agenda of studying the feasibility of natural language processing methods on scholarly knowledge graphs and lay the foundation for determining which methods can be used in which cases. Our work identifies the challenges and issues of automatically constructing scholarly knowledge graphs and opens up future research directions.
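The triple-classification idea behind exBERT can be pictured with a short, hedged sketch: a scholarly triple is verbalized as plain text and scored by a binary sequence classifier, and entity or relation prediction then reduces to ranking candidate triples by that score. The checkpoint path, label mapping, and verbalization below are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal sketch of textual triple classification for scholarly KG completion.
# Assumption: a sequence classifier fine-tuned to label verbalized triples as
# plausible/implausible; the checkpoint path below is a placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "path/to/fine-tuned-triple-classifier"  # hypothetical checkpoint

def verbalize(head: str, relation: str, tail: str) -> str:
    """Turn a (head, relation, tail) triple into a plain-text statement."""
    return f"{head} {relation.replace('_', ' ')} {tail}"

def triple_score(head: str, relation: str, tail: str) -> float:
    """Probability that the verbalized triple is plausible (label 1 by assumption)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    inputs = tokenizer(verbalize(head, relation, tail), return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Entity prediction as ranking of candidate tails (candidates are hypothetical):
# best = max(["SciERC", "OAEI"], key=lambda t: triple_score("Plumber", "evaluated_on", t))
```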

    Advanced Methods for Entity Linking in the Life Sciences

    The amount of knowledge increases rapidly due to the increasing number of available data sources. However, the autonomy of data sources and the resulting heterogeneity prevent comprehensive data analysis and applications. Data integration aims to overcome this heterogeneity by unifying different data sources and enriching unstructured data. The enrichment of data consists of different subtasks, among them the annotation process, which links document phrases to terms of a standardized vocabulary. Annotated documents enable effective retrieval methods, comparability of different documents, and comprehensive data analysis, such as finding adverse drug effects based on patient data. A vocabulary enables comparability through standardized terms. An ontology can also serve as a vocabulary, but it is additionally defined by concepts, relationships, and logical constraints. The annotation process is applicable in different domains; nevertheless, generic and specialized domains differ with respect to the annotation process. This thesis emphasizes the differences between the domains and addresses the identified challenges. The majority of annotation approaches focus on the evaluation of general domains, such as Wikipedia. This thesis evaluates the developed annotation approaches with case report forms, which are medical documents for examining clinical trials. Natural language poses various challenges, such as expressing similar meanings with different phrases. The proposed annotation method, AnnoMap, considers this fuzziness of natural language. A further challenge is the reuse of verified annotations: existing annotations represent knowledge that can be reused for further annotation processes, and AnnoMap includes a reuse strategy that utilizes verified annotations to link new documents to appropriate concepts. Due to the broad spectrum of areas in the biomedical domain, different annotation tools exist, and they perform differently depending on the particular domain. This thesis therefore proposes a combination approach that unifies results from different tools: existing tool results are used to build a classification model that classifies new annotations as correct or incorrect. The results show that the reuse strategy and the machine learning-based combination improve the annotation quality compared to existing approaches focusing on the biomedical domain. A further part of data integration is entity resolution, which builds unified knowledge bases from different data sources. A data source consists of a set of records characterized by attributes, and the goal of entity resolution is to identify records representing the same real-world entity. Many methods focus on linking data sources consisting of records characterized by attributes; however, only a few methods can handle graph-structured knowledge bases or consider temporal aspects. Temporal aspects are essential for identifying the same entities over different time intervals, since records are subject to conditions that change over time. Moreover, records can be related to other records, so that a small graph structure exists for each record; these small graphs can be linked to each other if they represent the same entity. This thesis proposes an entity resolution approach for census data consisting of person records for different time intervals. The approach also considers the graph structure of persons given by family relationships.
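The fuzzy phrase-to-concept matching described above for the annotation process can be illustrated with a minimal, hedged sketch; the toy vocabulary, similarity measure, and threshold are assumptions for illustration and not AnnoMap's actual configuration.

```python
# Toy sketch: link a document phrase to vocabulary concepts via fuzzy string similarity.
# Vocabulary entries, similarity function, and threshold are illustrative only.
from difflib import SequenceMatcher

VOCABULARY = {  # concept id -> preferred term (made-up, UMLS-style entries)
    "C0020538": "hypertensive disease",
    "C0011849": "diabetes mellitus",
    "C0004238": "atrial fibrillation",
}

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def annotate(phrase: str, threshold: float = 0.6):
    """Return (concept id, term, score) candidates above the threshold, best first."""
    scored = [(cid, term, similarity(phrase, term)) for cid, term in VOCABULARY.items()]
    return sorted((c for c in scored if c[2] >= threshold), key=lambda c: c[2], reverse=True)

print(annotate("hypertension"))  # still matches C0020538 despite the different surface form
```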
To achieve high-quality results, current methods apply machine-learning techniques to classify record pairs as referring to the same entity or not. The classification uses a model generated from training data; in this case, the training data is a set of record pairs labeled as duplicates or non-duplicates. Nevertheless, the generation of training data is a time-consuming task, so active learning techniques are relevant for reducing the number of required training examples. The entity resolution method for temporal graph-structured data shows an improvement compared to previous collective entity resolution approaches. The developed active learning approach achieves results comparable to supervised learning methods and outperforms other limited-budget active learning methods. Besides the entity resolution approach, the thesis introduces the concept of evolution operators for communities. These operators express the dynamics of communities and individuals; for instance, they can formulate that two communities merged or split over time, and they allow observing the history of individuals. Overall, the presented annotation approaches generate high-quality annotations for medical forms. The annotations enable comprehensive analysis across different data sources as well as accurate queries. The proposed entity resolution approaches improve on existing ones and thereby contribute to the generation of high-quality knowledge graphs and to data analysis tasks.
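The role of active learning in reducing the number of labeled record pairs can be sketched as a pool-based uncertainty-sampling loop; the feature encoding, model, and batch size are illustrative assumptions rather than the approach developed in the thesis.

```python
# Sketch of pool-based active learning with uncertainty sampling for duplicate detection.
# Assumptions: record pairs are already encoded as similarity feature vectors (numpy array),
# the seed set contains both classes, and `oracle` stands in for a human annotator.
import numpy as np
from sklearn.linear_model import LogisticRegression

def most_uncertain(model, pool_X, batch_size=10):
    """Indices of the pool pairs whose predicted duplicate probability is closest to 0.5."""
    proba = model.predict_proba(pool_X)[:, 1]
    return np.argsort(np.abs(proba - 0.5))[:batch_size]

def active_learning_loop(pool_X, oracle, seed_X, seed_y, rounds=5):
    X, y = list(seed_X), list(seed_y)
    remaining = np.arange(len(pool_X))
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(np.array(X), np.array(y))
        picked = remaining[most_uncertain(model, pool_X[remaining])]
        X.extend(pool_X[picked])            # query labels only for the most uncertain pairs
        y.extend(oracle(picked))
        remaining = np.setdiff1d(remaining, picked)
    return model.fit(np.array(X), np.array(y))
```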

    Cross-Domain information extraction from scientific articles for research knowledge graphs

    Today’s scholarly communication is a document-centred process and, as such, rather inefficient. Fundamental contents of research papers are not accessible to computers since they are only present in unstructured PDF files. Therefore, current research infrastructures are not able to assist scientists appropriately in their core research tasks. This thesis addresses this issue and proposes methods to automatically extract relevant information from scientific articles for Research Knowledge Graphs (RKGs) that represent scholarly knowledge in a structured and interlinked form. First, this thesis conducts a requirements analysis for an Open Research Knowledge Graph (ORKG). We present literature-related use cases of researchers that should be supported by an ORKG-based system and their specific requirements for the underlying ontology and instance data. Based on this analysis, the identified use cases are categorised into two groups: the first group of use cases needs manual or semi-automatic approaches for knowledge graph (KG) construction, since they require high correctness of the instance data; the second group requires high completeness and can tolerate noisy instance data, and thus needs automatic approaches for KG population. This thesis focuses on the second group of use cases and provides contributions for the machine learning tasks that aim to support them. To assess the relevance of a research paper, scientists usually skim through titles, abstracts, introductions, and conclusions. An organised presentation of an article's essential information would make this process more time-efficient. The task of sequential sentence classification addresses this issue by classifying the sentences of an article into categories such as research problem, used methods, or obtained results. To address this problem, we propose a novel unified cross-domain multi-task deep learning approach that makes use of datasets from different scientific domains (e.g. biomedicine and computer graphics) and with varying structures (e.g. datasets covering either only abstracts or full papers). Our approach significantly outperforms the state of the art on full-paper datasets while being competitive on datasets consisting of abstracts. Moreover, our approach enables the categorisation of sentences in a domain-independent manner. Furthermore, we present the novel task of domain-independent information extraction, which extracts scientific concepts from research papers irrespective of their domain. This task aims to support the use cases "find related work" and "get recommended articles". For this purpose, we introduce a set of generic scientific concepts that are relevant across ten domains in Science, Technology, and Medicine (STM) and release an annotated dataset of 110 abstracts from these domains. Since the annotation of scientific text is costly, we suggest an active learning strategy based on a state-of-the-art deep learning approach. The proposed method enables us to nearly halve the amount of required training data. Then, we extend this domain-independent information extraction approach with the task of coreference resolution, which aims to identify mentions that refer to the same concept or entity. Baseline results on our corpus showed that current state-of-the-art approaches for coreference resolution perform poorly on scientific text. Therefore, we propose a sequential transfer learning approach that exploits annotated datasets from non-academic domains.
Our experimental results demonstrate that our approach noticeably outperforms the state-of-the-art baselines. Additionally, we investigate the impact of coreference resolution on KG population. We demonstrate that coreference resolution has a small impact on the number of resulting concepts in the KG but significantly improves its quality. Consequently, using our domain-independent information extraction approach, we populate an RKG from 55,485 abstracts of the ten investigated STM domains. We show that every domain mainly uses its own terminology and that the populated RKG contains useful concepts. Moreover, we propose a novel approach for the task of citation recommendation. This task can help researchers improve the quality of their work by finding or recommending relevant related work. Our approach exploits RKGs that interlink research papers based on mentioned scientific concepts. Using our automatically populated RKG, we demonstrate that combining information from RKGs with existing state-of-the-art approaches is beneficial. Finally, we conclude the thesis and sketch possible directions of future work.
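The cross-domain multi-task idea for sequential sentence classification can be pictured as a shared sentence encoder with one classification head per dataset, so corpora with different label sets (abstracts vs. full papers) can be trained jointly. The encoder checkpoint and label-set sizes below are illustrative assumptions, and this toy version scores each sentence in isolation rather than in its sequential context.

```python
# Sketch of a multi-task sentence classifier: one shared encoder, one head per dataset.
# Checkpoint name and label-set sizes are illustrative; context between neighbouring
# sentences (the "sequential" part) is omitted for brevity.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

ENCODER = "allenai/scibert_scivocab_uncased"

class MultiTaskSentenceClassifier(nn.Module):
    def __init__(self, encoder_name, task_labels):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One linear head per dataset/label scheme; the encoder weights are shared.
        self.heads = nn.ModuleDict({task: nn.Linear(hidden, n) for task, n in task_labels.items()})

    def forward(self, input_ids, attention_mask, task):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # [CLS] representation of the sentence
        return self.heads[task](cls)           # logits in the chosen dataset's label space

tokenizer = AutoTokenizer.from_pretrained(ENCODER)
model = MultiTaskSentenceClassifier(ENCODER, {"abstracts": 5, "full_papers": 7})
batch = tokenizer(["We propose a graph-based method for entity linking."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"], task="abstracts")
```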

    Federated Query Processing over Heterogeneous Data Sources in a Semantic Data Lake

    Data provides the basis for emerging scientific and interdisciplinary data-centric applications with the potential of improving the quality of life for citizens. Big Data plays an important role in promoting both manufacturing and scientific development through industrial digitization and emerging interdisciplinary research. Open data initiatives have encouraged the publication of Big Data by exploiting the decentralized nature of the Web, allowing for the availability of heterogeneous data generated and maintained by autonomous data providers. Consequently, the growing volume of data consumed by different applications raises the need for effective data integration approaches able to process large volumes of data represented in different formats, schemas, and models, which may also include sensitive data, e.g., financial transactions, medical procedures, or personal data. Data Lakes are composed of heterogeneous data sources kept in their original format, which reduces the overhead of materialized data integration. Query processing over Data Lakes requires a semantic description of the data collected from heterogeneous data sources; a Data Lake with such semantic annotations is referred to as a Semantic Data Lake. Transforming Big Data into actionable knowledge demands novel and scalable techniques for enabling not only Big Data ingestion and curation into the Semantic Data Lake, but also efficient large-scale semantic data integration, exploration, and discovery. Federated query processing techniques utilize source descriptions to find relevant data sources and to devise efficient execution plans that minimize the total execution time and maximize the completeness of answers. Existing federated query processing engines employ a coarse-grained description model in which the semantics encoded in the data sources are ignored. Such descriptions may lead to the erroneous selection of data sources for a query and to unnecessary retrieval of data, thus affecting the performance of the query processing engine. In this thesis, we address the problem of federated query processing over heterogeneous data sources in a Semantic Data Lake. First, we tackle the challenge of knowledge representation and propose a novel source description model, RDF Molecule Templates (RDF-MTs), which describes the knowledge available in a Semantic Data Lake in terms of abstract descriptions of entities belonging to the same semantic concept. Then, we propose a technique for data source selection and query decomposition, the MULDER approach, and query planning and optimization techniques, Ontario, which exploit the characteristics of heterogeneous data sources described using RDF-MTs and provide uniform access to heterogeneous data sources. We then address the challenge of enforcing privacy and access control requirements imposed by data providers. We introduce a privacy-aware federated query technique, BOUNCER, able to enforce privacy and access control regulations during query processing over data sources in a Semantic Data Lake. In particular, BOUNCER exploits RDF-MT-based source descriptions in order to express privacy and access control policies as well as to enforce them automatically during source selection, query decomposition, and planning.
Furthermore, BOUNCER implements query decomposition and optimization techniques able to identify query plans over data sources that not only contain the entities relevant to answer a query but are also regulated by policies that allow access to these entities. Finally, we tackle the problem of interest-based update propagation and co-evolution of data sources. We present a novel approach for interest-based RDF update propagation that consistently maintains a full or partial replication of large datasets and deals with their co-evolution.
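The role of RDF Molecule Templates in source selection can be illustrated with a toy sketch: each source is described by the classes it can answer and the predicates attached to each class, and a star-shaped group of triple patterns is routed only to sources whose template covers all of the star's predicates. The source descriptions and predicate names below are made up for illustration and do not reproduce the actual RDF-MT model or the MULDER/Ontario implementations.

```python
# Toy sketch of RDF-MT-style source selection: a source is chosen for a star-shaped
# subquery only if one of its (class -> predicates) templates covers every predicate
# of that star. All descriptions and predicates are illustrative.

SOURCE_DESCRIPTIONS = {
    "drugbank-endpoint": {"ex:Drug": {"ex:name", "ex:interactsWith", "ex:indication"}},
    "clinical-csv-file": {"ex:Trial": {"ex:studies", "ex:phase"}},
    "dbpedia-endpoint":  {"ex:Drug": {"ex:name", "ex:casNumber"}},
}

def select_sources(star_predicates):
    """Sources whose template covers all predicates of the star-shaped subquery."""
    return [
        source
        for source, templates in SOURCE_DESCRIPTIONS.items()
        if any(star_predicates <= predicates for predicates in templates.values())
    ]

# A star over ?d with predicates ex:name and ex:interactsWith is routed only to
# drugbank-endpoint, avoiding unnecessary retrieval from dbpedia-endpoint.
print(select_sources({"ex:name", "ex:interactsWith"}))
```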

    The development of the Latin gerund in Rhaeto-Romance

    L gerundie indichea la modalitèdes o la condizions de n'azion. L vegn durà demò te la forma prejenta; tel passà se doura de autra construzions perifrastiches. [...] De spes se pel l transformèr te n complement indiret de temp, de meso, de modo o te na frasa temporèla, ipotetica, e c.i. This is how the Grammar of Fascian Ladin (issued by the Ladin Cultural Institute in Vigo di Fassa) describes this grammatical category. However, there is a discrepancy between the canonical functions of the gerund as usually presented and described in traditional grammars and its actual use. The results of a pilot study conducted at the University of Verona with the help of a limited number of Ladin speakers have highlighted this divergence: alternative non-gerundial constructions, such as instrumental, temporal, causal and similar expressions, are preferred over the canonical one. For example, we find Col dormir l se a remetù instead of the expected *Dormian l se a remetù. Few studies have recently examined this category, but none of them provide a full overview of the gerundial structures. For instance, Casalicchio 2013 illustrates different types of gerund constructions with perception verbs in Gardenese. He claims that sentences with a subordinate verb use the gerund, instead of the infinitive, as in Italian; in his view, the use of the gerund in perceptive constructions may have developed in Vulgar Latin. In this project, I identify the contexts in which the gerund, in terms of its etymological definition, is used, and also which functions this so-called "gerund" has. This work is supported by extensive fieldwork that covers two varieties of Ladin, Friulian, and a smaller amount of data from Swiss Romansh. Data was collected via a survey conducted as a written translation task from Italian/German into the speaker's variety. In addition to this corpus reflecting oral speech, I included a corpus of written texts and a smaller corpus of diachronic data. The analysis provides a broader picture of the development of the Latin gerund in this Alpine area. The final outcome allows us to assess variation and possible contact-induced phenomena in synchrony, drawing on the analysis of processes of change in diachrony.

    Human Practice. Digital Ecologies. Our Future. : 14. Internationale Tagung Wirtschaftsinformatik (WI 2019) : Tagungsband

    Published by: universi - Universitätsverlag Siegen. ISBN: 978-3-96182-063-4. From the contents:

Track 1: Produktion & Cyber-Physische Systeme
Requirements and a Meta Model for Exchanging Additive Manufacturing Capacities Service Systems, Smart Service Systems and Cyber-Physical Systems—What’s the difference? Towards a Unified Terminology Developing an Industrial IoT Platform – Trade-off between Horizontal and Vertical Approaches Machine Learning und Complex Event Processing: Effiziente Echtzeitauswertung am Beispiel Smart Factory Sensor retrofit for a coffee machine as condition monitoring and predictive maintenance use case Stakeholder-Analyse zum Einsatz IIoT-basierter Frischeinformationen in der Lebensmittelindustrie Towards a Framework for Predictive Maintenance Strategies in Mechanical Engineering - A Method-Oriented Literature Analysis Development of a matching platform for the requirement-oriented selection of cyber physical systems for SMEs

Track 2: Logistic Analytics
An Empirical Study of Customers’ Behavioral Intention to Use Ridepooling Services – An Extension of the Technology Acceptance Model Modeling Delay Propagation and Transmission in Railway Networks What is the impact of company specific adjustments on the acceptance and diffusion of logistic standards? Robust Route Planning in Intermodal Urban Traffic

Track 3: Unternehmensmodellierung & Informationssystemgestaltung (Enterprise Modelling & Information Systems Design)
Work System Modeling Method with Different Levels of Specificity and Rigor for Different Stakeholder Purposes Resolving Inconsistencies in Declarative Process Models based on Culpability Measurement Strategic Analysis in the Realm of Enterprise Modeling – On the Example of Blockchain-Based Initiatives for the Electricity Sector Zwischenbetriebliche Integration in der Möbelbranche: Konfigurationen und Einflussfaktoren Novices’ Quality Perceptions and the Acceptance of Process Modeling Grammars Entwicklung einer Definition für Social Business Objects (SBO) zur Modellierung von Unternehmensinformationen Designing a Reference Model for Digital Product Configurators Terminology for Evolving Design Artifacts Business Role-Object Specification: A Language for Behavior-aware Structural Modeling of Business Objects Generating Smart Glasses-based Information Systems with BPMN4SGA: A BPMN Extension for Smart Glasses Applications Using Blockchain in Peer-to-Peer Carsharing to Build Trust in the Sharing Economy Testing in Big Data: An Architecture Pattern for a Development Environment for Innovative, Integrated and Robust Applications

Track 4: Lern- und Wissensmanagement (e-Learning and Knowledge Management)
eGovernment Competences revisited – A Literature Review on necessary Competences in a Digitalized Public Sector Say Hello to Your New Automated Tutor – A Structured Literature Review on Pedagogical Conversational Agents Teaching the Digital Transformation of Business Processes: Design of a Simulation Game for Information Systems Education Conceptualizing Immersion for Individual Learning in Virtual Reality Designing a Flipped Classroom Course – a Process Model The Influence of Risk-Taking on Knowledge Exchange and Combination Gamified Feedback durch Avatare im Mobile Learning Alexa, Can You Help Me Solve That Problem? - Understanding the Value of Smart Personal Assistants as Tutors for Complex Problem Tasks

Track 5: Data Science & Business Analytics
Matching with Bundle Preferences: Tradeoff between Fairness and Truthfulness Applied image recognition: guidelines for using deep learning models in practice Yield Prognosis for the Agrarian Management of Vineyards using Deep Learning for Object Counting Reading Between the Lines of Qualitative Data – How to Detect Hidden Structure Based on Codes Online Auctions with Dual-Threshold Algorithms: An Experimental Study and Practical Evaluation Design Features of Non-Financial Reward Programs for Online Reviews: Evaluation based on Google Maps Data Topic Embeddings – A New Approach to Classify Very Short Documents Based on Predefined Topics Leveraging Unstructured Image Data for Product Quality Improvement Decision Support for Real Estate Investors: Improving Real Estate Valuation with 3D City Models and Points of Interest Knowledge Discovery from CVs: A Topic Modeling Procedure Online Product Descriptions – Boost for your Sales? Entscheidungsunterstützung durch historienbasierte Dienstreihenfolgeplanung mit Pattern A Semi-Automated Approach for Generating Online Review Templates Machine Learning goes Measure Management: Leveraging Anomaly Detection and Parts Search to Improve Product-Cost Optimization Bedeutung von Predictive Analytics für den theoretischen Erkenntnisgewinn in der IS-Forschung

Track 6: Digitale Transformation und Dienstleistungen
Heuristic Theorizing in Software Development: Deriving Design Principles for Smart Glasses-based Systems Mirroring E-service for Brick and Mortar Retail: An Assessment and Survey Taxonomy of Digital Platforms: A Platform Architecture Perspective Value of Star Players in the Digital Age Local Shopping Platforms – Harnessing Locational Advantages for the Digital Transformation of Local Retail Outlets: A Content Analysis A Socio-Technical Approach to Manage Analytics-as-a-Service – Results of an Action Design Research Project Characterizing Approaches to Digital Transformation: Development of a Taxonomy of Digital Units Expectations vs. Reality – Benefits of Smart Services in the Field of Tension between Industry and Science Innovation Networks and Digital Innovation: How Organizations Use Innovation Networks in a Digitized Environment Characterising Social Reading Platforms— A Taxonomy-Based Approach to Structure the Field Less Complex than Expected – What Really Drives IT Consulting Value Modularity Canvas – A Framework for Visualizing Potentials of Service Modularity Towards a Conceptualization of Capabilities for Innovating Business Models in the Industrial Internet of Things A Taxonomy of Barriers to Digital Transformation Ambidexterity in Service Innovation Research: A Systematic Literature Review Design and success factors of an online solution for cross-pillar pension information

Track 7: IT-Management und -Strategie
A Frugal Support Structure for New Software Implementations in SMEs How to Structure a Company-wide Adoption of Big Data Analytics The Changing Roles of Innovation Actors and Organizational Antecedents in the Digital Age Bewertung des Kundennutzens von Chatbots für den Einsatz im Servicedesk Understanding the Benefits of Agile Software Development in Regulated Environments Are Employees Following the Rules? On the Effectiveness of IT Consumerization Policies Agile and Attached: The Impact of Agile Practices on Agile Team Members’ Affective Organisational Commitment The Complexity Trap – Limits of IT Flexibility for Supporting Organizational Agility in Decentralized Organizations Platform Openness: A Systematic Literature Review and Avenues for Future Research Competence, Fashion and the Case of Blockchain The Digital Platform Otto.de: A Case Study of Growth, Complexity, and Generativity

Track 8: eHealth & alternde Gesellschaft
Security and Privacy of Personal Health Records in Cloud Computing Environments – An Experimental Exploration of the Impact of Storage Solutions and Data Breaches Patientenintegration durch Pfadsysteme Digitalisierung in der Stressprävention – eine qualitative Interviewstudie zu Nutzenpotenzialen User Dynamics in Mental Health Forums – A Sentiment Analysis Perspective Intent and the Use of Wearables in the Workplace – A Model Development Understanding Patient Pathways in the Context of Integrated Health Care Services - Implications from a Scoping Review Understanding the Habitual Use of Wearable Activity Trackers On the Fit in Fitness Apps: Studying the Interaction of Motivational Affordances and Users’ Goal Orientations in Affecting the Benefits Gained Gamification in Health Behavior Change Support Systems - A Synthesis of Unintended Side Effects Investigating the Influence of Information Incongruity on Trust-Relations within Trilateral Healthcare Settings

Track 9: Krisen- und Kontinuitätsmanagement
Potentiale von IKT beim Ausfall kritischer Infrastrukturen: Erwartungen, Informationsgewinnung und Mediennutzung der Zivilbevölkerung in Deutschland Fake News Perception in Germany: A Representative Study of People’s Attitudes and Approaches to Counteract Disinformation Analyzing the Potential of Graphical Building Information for Fire Emergency Responses: Findings from a Controlled Experiment

Track 10: Human-Computer Interaction
Towards a Taxonomy of Platforms for Conversational Agent Design Measuring Service Encounter Satisfaction with Customer Service Chatbots using Sentiment Analysis Self-Tracking and Gamification: Analyzing the Interplay of Motivations, Usage and Motivation Fulfillment Erfolgsfaktoren von Augmented-Reality-Applikationen: Analyse von Nutzerrezensionen mit dem Review-Mining-Verfahren Designing Dynamic Decision Support for Electronic Requirements Negotiations Who is Stressed by Using ICTs? A Qualitative Comparison Analysis with the Big Five Personality Traits to Understand Technostress Walking the Middle Path: How Medium Trade-Off Exposure Leads to Higher Consumer Satisfaction in Recommender Agents Theory-Based Affordances of Utilitarian, Hedonic and Dual-Purposed Technologies: A Literature Review Eliciting Customer Preferences for Shopping Companion Apps: A Service Quality Approach The Role of Early User Participation in Discovering Software – A Case Study from the Context of Smart Glasses The Fluidity of the Self-Concept as a Framework to Explain the Motivation to Play Video Games Heart over Heels? An Empirical Analysis of the Relationship between Emotions and Review Helpfulness for Experience and Credence Goods

Track 11: Information Security and Information Privacy
Unfolding Concerns about Augmented Reality Technologies: A Qualitative Analysis of User Perceptions To (Psychologically) Own Data is to Protect Data: How Psychological Ownership Determines Protective Behavior in a Work and Private Context Understanding Data Protection Regulations from a Data Management Perspective: A Capability-Based Approach to EU-GDPR On the Difficulties of Incentivizing Online Privacy through Transparency: A Qualitative Survey of the German Health Insurance Market What is Your Selfie Worth? A Field Study on Individuals’ Valuation of Personal Data Justification of Mass Surveillance: A Quantitative Study An Exploratory Study of Risk Perception for Data Disclosure to a Network of Firms

Track 12: Umweltinformatik und nachhaltiges Wirtschaften
Kommunikationsfäden im Nadelöhr – Fachliche Prozessmodellierung der Nachhaltigkeitskommunikation am Kapitalmarkt Potentiale und Herausforderungen der Materialflusskostenrechnung Computing Incentives for User-Based Relocation in Carsharing Sustainability’s Coming Home: Preliminary Design Principles for the Sustainable Smart District Substitution of hazardous chemical substances using Deep Learning and t-SNE A Hierarchy of DSMLs in Support of Product Life-Cycle Assessment A Survey of Smart Energy Services for Private Households Door-to-Door Mobility Integrators as Keystone Organizations of Smart Ecosystems: Resources and Value Co-Creation – A Literature Review Ein Entscheidungsunterstützungssystem zur ökonomischen Bewertung von Mieterstrom auf Basis der Clusteranalyse Discovering Blockchain for Sustainable Product-Service Systems to enhance the Circular Economy Digitale Rückverfolgbarkeit von Lebensmitteln: Eine verbraucherinformatische Studie Umweltbewusstsein durch audiovisuelles Content Marketing? Eine experimentelle Untersuchung zur Konsumentenbewertung nachhaltiger Smartphones Towards Predictive Energy Management in Information Systems: A Research Proposal A Web Browser-Based Application for Processing and Analyzing Material Flow Models using the MFCA Methodology

Track 13: Digital Work - Social, mobile, smart
On Conversational Agents in Information Systems Research: Analyzing the Past to Guide Future Work The Potential of Augmented Reality for Improving Occupational First Aid Prevent a Vicious Circle! The Role of Organizational IT-Capability in Attracting IT-affine Applicants Good, Bad, or Both? Conceptualization and Measurement of Ambivalent User Attitudes Towards AI A Case Study on Cross-Hierarchical Communication in Digital Work Environments ‘Show Me Your People Skills’ - Employing CEO Branding for Corporate Reputation Management in Social Media A Multiorganisational Study of the Drivers and Barriers of Enterprise Collaboration Systems-Enabled Change The More the Merrier? The Effect of Size of Core Team Subgroups on Success of Open Source Projects The Impact of Anthropomorphic and Functional Chatbot Design Features in Enterprise Collaboration Systems on User Acceptance Digital Feedback for Digital Work? Affordances and Constraints of a Feedback App at InsurCorp The Effect of Marker-less Augmented Reality on Task and Learning Performance Antecedents for Cyberloafing – A Literature Review Internal Crowd Work as a Source of Empowerment - An Empirical Analysis of the Perception of Employees in a Crowdtesting Project

Track 14: Geschäftsmodelle und digitales Unternehmertum
Dividing the ICO Jungle: Extracting and Evaluating Design Archetypes Capturing Value from Data: Exploring Factors Influencing Revenue Model Design for Data-Driven Services Understanding the Role of Data for Innovating Business Models: A System Dynamics Perspective Business Model Innovation and Stakeholder: Exploring Mechanisms and Outcomes of Value Creation and Destruction Business Models for Internet of Things Platforms: Empirical Development of a Taxonomy and Archetypes Revitalizing established Industrial Companies: State of the Art and Success Principles of Digital Corporate Incubators When 1+1 is Greater than 2: Concurrence of Additional Digital and Established Business Models within Companies

Special Track 1: Student Track
Investigating Personalized Price Discrimination of Textile-, Electronics- and General Stores in German Online Retail From Facets to a Universal Definition – An Analysis of IoT Usage in Retail Is the Technostress Creators Inventory Still an Up-To-Date Measurement Instrument? Results of a Large-Scale Interview Study Application of Media Synchronicity Theory to Creative Tasks in Virtual Teams Using the Example of Design Thinking TrustyTweet: An Indicator-based Browser-Plugin to Assist Users in Dealing with Fake News on Twitter Application of Process Mining Techniques to Support Maintenance-Related Objectives How Voice Can Change Customer Satisfaction: A Comparative Analysis between E-Commerce and Voice Commerce Business Process Compliance and Blockchain: How Does the Ethereum Blockchain Address Challenges of Business Process Compliance? Improving Business Model Configuration through a Question-based Approach The Influence of Situational Factors and Gamification on Intrinsic Motivation and Learning Evaluation von ITSM-Tools für Integration und Management von Cloud-Diensten am Beispiel von ServiceNow How Software Promotes the Integration of Sustainability in Business Process Management Criteria Catalog for Industrial IoT Platforms from the Perspective of the Machine Tool Industry

Special Track 3: Demos & Prototyping
Privacy-friendly User Location Tracking with Smart Devices: The BeaT Prototype Application-oriented robotics in nursing homes Augmented Reality for Set-up Processes Mixed Reality for supporting Remote-Meetings Gamification zur Motivationssteigerung von Werkern bei der Betriebsdatenerfassung Automatically Extracting and Analyzing Customer Needs from Twitter: A “Needmining” Prototype GaNEsHA: Opportunities for Sustainable Transportation in Smart Cities TUCANA: A platform for using local processing power of edge devices for building data-driven services Demonstrator zur Beschreibung und Visualisierung einer kritischen Infrastruktur Entwicklung einer alltagsnahen persuasiven App zur Bewegungsmotivation für ältere Nutzerinnen und Nutzer A browser-based modeling tool for studying the learning of conceptual modeling based on a multi-modal data collection approach Exergames & Dementia: An interactive System for People with Dementia and their Care-Network

Workshops
Workshop Ethics and Morality in Business Informatics (Workshop Ethik und Moral in der Wirtschaftsinformatik – EMoWI’19) Model-Based Compliance in Information Systems - Foundations, Case Description and Data Set of the MobIS-Challenge for Students and Doctoral Candidates Report of the Workshop on Concepts and Methods of Identifying Digital Potentials in Information Management Control of Systemic Risks in Global Networks - A Grand Challenge to Information Systems Research Die Mitarbeiter von morgen - Kompetenzen künftiger Mitarbeiter im Bereich Business Analytics Digitaler Konsum: Herausforderungen und Chancen der Verbraucherinformatik