148 research outputs found

    Closing Information Gaps with Need-driven Knowledge Sharing

    Get PDF
    Systems for asynchronous knowledge sharing – such as intranets, wikis, or file servers – frequently suffer from a lack of user contributions. A main reason is that information providers are decoupled from information seekers and therefore have little awareness of their information needs. Central questions of knowledge management are therefore which knowledge is particularly valuable and by what means knowledge holders can be motivated to share it. This thesis develops the approach of need-driven knowledge sharing (NKS), which consists of three elements. First, indicators of information need are collected – in particular search queries – and aggregated into a continuous forecast of the organizational information need (OIN). By comparing this forecast with the information available in personal and shared information spaces, organizational information gaps (OIG) are derived, which point to missing information. These gaps are made transparent by means of so-called mediation services and mediation spaces, which help to create awareness of organizational information needs and to steer knowledge sharing. The concrete realization of NKS is illustrated by three different applications, all of which build on established knowledge management systems. Inverse Search is a tool that suggests to knowledge holders documents from their personal information space to share in order to close organizational information gaps. Woogle extends conventional wiki systems with steering instruments for detecting and prioritizing missing information, so that the evolution of wiki content can be shaped in a demand-oriented way.
In a similar fashion, Semantic Need, an extension for Semantic MediaWiki, steers the capture of structured semantic data based on information needs expressed as structured queries. The implementation and evaluation of the three tools show that need-driven knowledge sharing is technically feasible and can be an important complement to knowledge management. Beyond that, the concept of mediation services and mediation spaces provides a framework for analyzing and designing tools according to NKS principles. Finally, the approach presented here also offers impulses for the further development of Internet services and infrastructures such as Wikipedia or the Semantic Web
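The OIN/OIG mechanism described in the abstract – aggregate search-query indicators into a demand forecast, then subtract what the shared information space already covers – can be sketched minimally as follows. This is an illustrative reconstruction, not code from the thesis; all names and the `min_demand` threshold are assumptions.

```python
from collections import Counter

def information_gaps(search_queries, indexed_topics, min_demand=2):
    """Return topics that are frequently searched for but not covered.

    Illustrative sketch of the NKS idea: the aggregated query counts
    stand in for the organizational information need (OIN); topics with
    demand but no coverage stand in for information gaps (OIG).
    """
    # Aggregate query indicators into a demand forecast (the OIN).
    demand = Counter(q.strip().lower() for q in search_queries)
    # A gap is demanded knowledge with no matching shared document/topic.
    covered = {t.lower() for t in indexed_topics}
    return {topic: count for topic, count in demand.items()
            if count >= min_demand and topic not in covered}

queries = ["vpn setup", "vpn setup", "expense policy", "vpn setup", "lunch menu"]
topics = ["Expense Policy", "Onboarding"]
print(information_gaps(queries, topics))  # {'vpn setup': 3}
```

A mediation service in the sense of the thesis would then surface such gaps to knowledge holders, e.g. as sharing suggestions in Inverse Search.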

    Seventh Biennial Report : June 2003 - March 2005

    No full text

    Data analytics 2016: proceedings of the fifth international conference on data analytics

    Get PDF

    Eighth Biennial Report : April 2005 – March 2007

    No full text

    ERP implementation methodologies and frameworks: a literature review

    Get PDF
    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. Often an ERP implementation project is the single largest IT project that an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, the concept of an ERP implementation supporting business processes across many different departments is not a generic, rigid, and uniform one, and it depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been a major concern in industry. ERP implementation therefore receives attention from practitioners and scholars alike, and both the business and the academic literature is abundant but not always conclusive or coherent. However, research on ERP systems has so far focused mainly on diffusion, use, and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems: even though such methods are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. A brief annotated literature review is conducted in order to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers interested in ERP implementation methodologies and frameworks, and its results will serve as input for a classification of the existing ones.
This paper also addresses the professional ERP community involved in the process of ERP implementation by promoting a better understanding of ERP implementation methodologies and frameworks, their variety, and their history

    Improving quality, timeliness and efficacy of data collection and management in population-based surveillance of vital events

    Get PDF
    Electronic data collection (EDC) has become familiar in recent years and has been quickly adopted in many research fields. It has become commonplace to assume that systems in which data are entered on mobile devices connected through secure networks to central servers are of a higher standard than older paper-based data collection (PDC) systems. Although the notion that EDC performs better than PDC seems reasonable and is widely accepted, few studies have tried to formally evaluate whether it can improve data quality, and none of these, to our knowledge, are in the context of population-based longitudinal surveillance. This thesis project aims to assess the strength of OpenHDS, a system based on EDC, used in the population-based surveillance of vital events via Health and Demographic Surveillance Systems (HDSS). HDSS are both sources of vital-event data and potential platforms for health intervention studies in the areas where they operate. Setting up and running an HDSS is operationally challenging, and a reliable and efficient platform for data collection and management is a basic part of it. There are often major shortcomings in the data collection and management processes of running HDSS, though these have not been extensively documented. Recent technological advances, specifically the use of mobile devices for data collection and the adoption of the OpenHDS software for data management, which follows best practices for data management, appear to have the potential to resolve many of these issues. The INDEPTH Network and others have invested substantial resources in the roll-out and support of OpenHDS, and there is anecdotal evidence that this has resulted in improvements, but there is considerable demand for compelling evidence.
The Swiss Tropical and Public Health Institute (Swiss TPH) has supported some INDEPTH sites in fully migrating to OpenHDS (Ifakara and Rufiji in Tanzania, Nanoro in Burkina Faso, Manhiça in Mozambique and Cross River in Nigeria), and some are in the migration process (7 sites in Ethiopia: Arba Minch, Butajira, Dabat, Gilgel Gibe, Kersa and Kilite Awlaelo). Other sites are at various stages of evaluating the possibility of adopting OpenHDS (Navrongo in Ghana, Niakhar in Senegal, Iganga/Mayuge in Uganda, Nouna in Burkina Faso, Birbhum in India, etc.), and all of them demand evidence of the benefits of adopting this system. Demonstrating the proper functioning of OpenHDS is also highly relevant in the light of recently proposed approaches to comprehensive health and epidemiological surveillance systems, which will need to satisfy requirements for data availability and integration that are considerably higher than in a classical HDSS. This project assesses the benefits of OpenHDS and how the advances in data collection and management translate into improved data quality and timeliness. It asks whether the system architecture of the novel data management system can be further exploited to enable data-integration approaches for near-time quality control and near-time response triggers. It also considers the main challenges in implementing such technologies in a new or an existing HDSS. This entails:
• A description of the new system and of a set of conjectured data-management best practices. For each of these best practices, a literature review assesses whether there is evidence to support it and whether OpenHDS follows it, with evidence of how this can be implemented in the field in two different real-life scenarios: the setting up of a new HDSS (Rusinga Island, western Kenya, and the Majete Malaria Project, southern Malawi) and the migration of existing HDSSs (Ifakara, Tanzania, and Nanoro, Burkina Faso) to OpenHDS (Chapter 1).
• A description of a novel approach to data collection and management in health and demographic surveillance designed to address the shortcomings of the traditional approach (OpenHDS), documenting the usage of this system in the establishment of a new HDSS (Rusinga) in Chapters 2 and 3.
• An evaluation of innovative quality-control measures made possible by the novel data system architecture (in particular, the use of satellite imagery to assess the completeness of enumerated populations, using the Majete HDSS as an example) in Chapter 4.
• A study of the potential benefits of electronic data collection (compared with paper) in terms of quality, timeliness, and costs, based on a contemporaneous comparison of the two systems in 8 villages in Nanoro, Burkina Faso, and on historical comparisons of data quality (as assessed by iSHARE2) before and after migration to OpenHDS for a range of INDEPTH sites, in Chapter 5.
A series of analyses was carried out to demonstrate that the OpenHDS data system for HDSSs can be implemented in both existing and newly established sites in low- and middle-income countries, and to test the hypothesis that the system is superior to previous approaches with regard to the quality and timeliness of the data and the running costs of the system. This involved describing the novel approach to data collection and management enabled by OpenHDS, evaluating its benefits in terms of the quality and timeliness of data collected with the OpenHDS mobile electronic data system, and comparing the cost of electronic data collection (OpenHDS) with that of paper. It also involved evaluating the impact of near-time data availability on data quality, and the potential of the OpenHDS system architecture for data integration in next-generation quality-control and surveillance-response applications. This work demonstrates that OpenHDS manages data in a standard reference format with rigorous checks on demographic events, offers the flexibility to introduce entire questionnaires and any variables a longitudinal study may require, and can replace legacy demographic surveillance systems, providing a real-time, low-cost, paperless opportunity to retire the outdated research systems that remain in use in developing countries.
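The timeliness dimension of such an EDC-versus-PDC comparison ultimately rests on a simple lag metric: the number of days between a vital event occurring and its record reaching the central database. The sketch below (illustrative data and function names, not code from the thesis) shows one way to compute it.

```python
from datetime import date
from statistics import mean

def mean_lag_days(records):
    """Mean reporting lag in days.

    records: list of (event_date, registration_date) pairs, where
    registration_date is when the record reached the central database.
    """
    return mean((reg - ev).days for ev, reg in records)

# Hypothetical records: paper rounds reach the database weeks later,
# electronic submissions arrive within days.
paper = [(date(2016, 1, 5), date(2016, 3, 1)), (date(2016, 1, 20), date(2016, 2, 28))]
electronic = [(date(2016, 1, 5), date(2016, 1, 8)), (date(2016, 1, 20), date(2016, 1, 21))]
print(mean_lag_days(paper), mean_lag_days(electronic))
```

Aggregating this lag per site and per period is enough to support the kind of before/after and contemporaneous comparisons the thesis describes.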

    Semantic discovery and reuse of business process patterns

    Get PDF
    Patterns currently play an important role in modern information systems (IS) development, and their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting reusability of recurrent generalized models in the very early stages of development. As a statement of research-in-progress, this paper focuses on business process patterns and proposes an initial methodological framework for their discovery and reuse within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse

    The Use of Social Media in Enterprises for Communication, Collaboration, and Knowledge Management

    Get PDF
    The success of social media on the Internet has led companies increasingly to deploy this technology internally, or to consider its implementation. Through the expected improvement of communication and interaction among employees on the one hand, and of knowledge management on the other, decision-makers in companies hope for considerable business value. Although there are some examples of successful enterprise social media (ESM) implementations, and more than 90% of Fortune 500 companies have introduced ESM or plan to do so, 80% of ESM projects miss their initially defined goals. While the decision to purchase the software is made centrally, its success depends on the active participation of employees – and, as the cited statistics show, the two are not necessarily correlated. In contrast to the organic growth observed in social media applications on the Internet in recent years (e.g. Facebook), the usage rate of internal ESM is often too low to secure the continued existence of the community. It is becoming increasingly clear that passive roll-out strategies, which rely on comparable organic growth occurring with ESM as well, are doomed to fail. Rather, upfront analyses must identify the tool suited to a specific application area, and strategies must be developed for winning employees over to interacting via the new applications. Since expenditure on information technology cannot be justified at low usage levels, this dissertation contributes, in eight essays, to illuminating various facets of ESM usage and thereby to a better understanding of the topic and, with it, to a more effective and efficient implementation of ESM.
The analysis of factors influencing different usage types of ESM, the optimization of enterprise search algorithms, and the reinterpretation of online product ratings can all help to better explain the changes in internal and external communication, collaboration, and knowledge management brought about by the use of ESM, and to deploy ESM in a more demand-oriented manner. The theoretical and practical implications arising from the individual essays are discussed in the corresponding sections of the respective papers

    On the classification and evaluation of prefetching schemes

    Get PDF
    Abstract available: p. [2

    The semantic database model as a basis for an automated database design tool

    Get PDF
    Bibliography: p. 257-80. The automatic database design system is a design aid for network database creation. It obtains a requirements specification from a user and generates a prototype database. This database is compatible with the Data Definition Language of DMS 1100, the database system on the Univac 1108 at the University of Cape Town. The user interface has been constructed in such a way that a computer-naive user can submit a description of his organisation to the system. Thus it constitutes a powerful database design tool, which should greatly alleviate the designer's tasks of communicating with users, and of creating an initial database definition. The requirements are formulated using the semantic database model, and semantic information in this model is incorporated into the database as integrity constraints. A relation scheme is also generated from the specification. As a result of this research, insight has been gained into the advantages and shortcomings of the semantic database model, and some principles for 'good' data models and database design methodologies have emerged
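The pipeline the abstract describes – a requirements specification in a semantic data model translated into a prototype schema, with semantic information carried over as integrity constraints – can be illustrated with a small sketch. This is a hypothetical reconstruction: the function, the input format, and the output dialect (generic SQL rather than DMS 1100 DDL) are all assumptions for illustration only.

```python
def to_ddl(entity, attributes):
    """Generate a table definition from a toy semantic specification.

    attributes: list of (name, sql_type, required) triples, where
    `required` models a mandatory attribute in the semantic data model
    and is carried over as a NOT NULL integrity constraint.
    """
    cols = []
    for name, sql_type, required in attributes:
        null = " NOT NULL" if required else ""
        cols.append(f"  {name} {sql_type}{null}")
    return f"CREATE TABLE {entity} (\n" + ",\n".join(cols) + "\n);"

# Illustrative specification: employee id and name are mandatory.
spec = [("emp_id", "INTEGER", True), ("name", "VARCHAR(60)", True),
        ("dept", "VARCHAR(30)", False)]
print(to_ddl("employee", spec))
```

The real system additionally derives a relation scheme and richer constraints; the point here is only the mapping from semantic-model annotations to schema-level integrity rules.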