253 research outputs found

    A Review of Trusted Broker Architectures for Data Sharing

    Sharing data across organizational boundaries must strike a balance between the competing data quality dimensions of access and security. Without access, data cannot be used and is consequently of no value. At the same time, uncontrolled access to data, especially sensitive personal data, can have dire legal and ethical consequences. This paper discusses the trade-offs between security and access for three styles of trusted broker architectures, in the hope of providing guidance for organizations implementing data sharing systems. Naval Postgraduate School Acquisition Research Program
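
    The abstract contrasts access and security without detailing the three broker styles, so the following is only a generic illustration of the trusted-broker idea it reviews: a mediator that holds data on behalf of providers and releases it to a requester only when a sharing policy permits. All class, dataset, and policy names here are assumptions, not taken from the paper.

```python
# Minimal sketch of a trusted broker mediating between the two competing
# dimensions: deny by default (security), release under policy (access).

class TrustedBroker:
    def __init__(self, datasets, policy):
        self._datasets = datasets      # data held on behalf of providers
        self._policy = policy          # {(requester, dataset): allowed}

    def request(self, requester, dataset):
        if not self._policy.get((requester, dataset), False):
            return None                # uncontrolled access is never granted
        return self._datasets[dataset] # controlled release adds value

broker = TrustedBroker(
    datasets={"claims": ["record-1", "record-2"]},
    policy={("hospital_a", "claims"): True},
)
granted = broker.request("hospital_a", "claims")
denied = broker.request("unknown_org", "claims")
```

    The deny-by-default lookup is one design choice among many; the three architectures the paper compares differ precisely in where such policy decisions sit.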

    Reconsidering Learning Communities: Expanding the Discourse by Challenging the Discourse

    This article draws on historical and philosophical lenses, and on interviews with students, to question some fundamental tenets underlying the practice of freshman learning communities (FLCs): that they develop community and improve students' learning experiences. The article brings to the discourse of FLCs some critical questions regarding their value and practice.

    Methods to Measure Importance of Data Attributes to Consumers of Information Products

    Errors in the data sources of information product (IP) manufacturing systems can degrade overall IP quality as perceived by consumers. Data defects from inputs propagate throughout the IP manufacturing process. Information Quality (IQ) research has focused on improving the quality of inputs to mitigate error propagation and ensure an IP will be fit for use by consumers. However, the feedback loop from IP consumers to IP producers is often incomplete, since the overall quality of the IP is determined not solely by the quality of its inputs but by the IP's fitness for use as a whole. It remains uncertain whether high-quality inputs directly correlate with a high-quality IP. The methods proposed in this paper investigate the effects of intentionally decreasing, or disrupting, the quality of inputs, measure consumers' evaluations against an undisrupted IP, and propose scenarios illustrating the advantage of these methods over traditional survey methods. Fitness for use may then be increased by emphasizing, in future IP revisions, those attributes deemed "important" by consumers.
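
    The disruption idea above lends itself to a simple scoring sketch: an attribute matters to consumers to the extent that degrading only that attribute lowers their ratings relative to the undisrupted IP. The function name, the mean-rating-drop rule, and the sample data below are all assumptions for illustration, not the paper's actual method.

```python
# Hypothetical disruption-based importance measure: importance of an
# attribute = mean consumer rating of the undisrupted IP minus the mean
# rating of the variant in which only that attribute was disrupted.

def attribute_importance(baseline_ratings, disrupted_ratings):
    """Return {attribute: importance} as the mean rating drop caused by
    disrupting that attribute alone."""
    baseline_mean = sum(baseline_ratings) / len(baseline_ratings)
    importance = {}
    for attr, ratings in disrupted_ratings.items():
        disrupted_mean = sum(ratings) / len(ratings)
        importance[attr] = baseline_mean - disrupted_mean
    return importance

# Consumers rate the undisrupted IP and two disrupted variants (1-10 scale).
baseline = [8, 9, 8, 7]
disrupted = {
    "timeliness":   [4, 5, 4, 5],   # large drop -> deemed important
    "completeness": [7, 8, 7, 8],   # small drop -> less important
}
scores = attribute_importance(baseline, disrupted)
```

    Unlike a survey that asks consumers which attributes they value, this design observes the effect of a controlled quality change, which is the advantage the paper argues for.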

    CoDoSA: A Lightweight, XML-Based Framework for Integrating Unstructured Textual Information

    One of the most fundamental dimensions of information quality is access. For many organizations, a large part of their information assets is locked away in Unstructured Textual Information (UTI) in the form of email, letters, contracts, call notes, and spreadsheets. In addition to internal UTI, there is a wealth of publicly available UTI on websites and in newspapers, courthouse records, and other sources that can add value when combined with internally managed information. This paper describes a system called Compressed Document Set Architecture (CoDoSA), designed to facilitate the integration of UTI into a structured database environment where it can be more readily accessed and manipulated. The CoDoSA Framework comprises an XML-based metadata standard and an associated Application Program Interface (API). The paper further describes how CoDoSA can facilitate the storage and management of information during the ETL (Extract, Transform, and Load) process used to integrate UTI. It also explains how CoDoSA promotes higher information quality by providing several features that simplify the governance of metadata standards and the enforcement of data quality constraints across different UTI applications and development teams. In addition, CoDoSA provides a mechanism for inserting semantic tags into captured UTI, tags that can be used in later steps to drive semantically mediated queries and processes.
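
    The abstract does not publish CoDoSA's schema, so the sketch below only illustrates the general shape of an XML metadata record of the kind described: provenance metadata around captured text, plus inline semantic tags for later query steps. Every element name (`document`, `meta`, `body`, `sem`) and the wrapper function are hypothetical stand-ins, not the CoDoSA standard or API.

```python
# Build an XML record wrapping a piece of unstructured text with provenance
# metadata and semantic tags, ready for ETL into a structured database.
import xml.etree.ElementTree as ET

def wrap_uti(source, doc_type, text, tags):
    doc = ET.Element("document")
    meta = ET.SubElement(doc, "meta")
    ET.SubElement(meta, "source").text = source   # where the UTI came from
    ET.SubElement(meta, "type").text = doc_type   # letter, email, contract...
    body = ET.SubElement(doc, "body")
    body.text = text
    for name, value in tags.items():              # semantic tags for later
        ET.SubElement(body, "sem", attrib={"name": name}).text = value
    return ET.tostring(doc, encoding="unicode")

record = wrap_uti(
    source="courthouse_records",
    doc_type="letter",
    text="Re: transfer of parcel 42-A",
    tags={"parcel_id": "42-A"},
)
```

    A shared wrapper like this is also where metadata governance can be enforced: every team producing UTI records goes through one function that validates against the agreed standard.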

    Critical Cultural Success Factors for Achieving High Quality Information in an Organization

    While information and data quality practitioners generally agree that social, cultural, and organizational factors are the most important determinants of the success or failure of an organization's data quality programs, there is little to no existing research quantifying these factors. In this research we build on both our previous work and others' to distill and clarify the cultural factors that are the Critical Cultural Success Factors (CCSFs) for successful Information and Data Quality programs in an organization. Using the Delphi method to gain consensus from a group of experts, we distilled fourteen factors down to six and clarified the definitions of those six. We begin to explain how these CCSFs fit into Organizational Learning Theory, and we plan ultimately to define a new system dynamics model incorporating them so that organizations and information quality practitioners can positively affect the success of information and data quality programs.

    An Empirical Study of Extreme Programming

    Extreme Programming (XP) is a drastic departure from traditional software development processes, in which a complete planning cycle usually precedes any design and implementation work. We report results from an empirical study of two object-oriented systems that were developed using a process similar to XP. In particular, we used two metrics, System Design Instability (SDI) and Class Implementation Instability (CII), to track design evolution. We found that both systems experienced a significant increase in classes in the middle of the process. The new stories introduced at the beginning of each cycle may change the existing design unpredictably. The CII metric seems to give a good indication of project completeness.
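
    The abstract names SDI and CII but does not give their formulas, so the sketch below assumes a generic instability measure (share of classes added or removed between successive iteration snapshots, relative to the later snapshot) purely to illustrate how a mid-project burst of new classes registers as a spike; the paper's actual definitions may differ.

```python
# Toy design-instability measure over class-model snapshots per iteration.

def design_instability(prev_classes, curr_classes):
    """Percentage of the current class set accounted for by classes
    added or removed since the previous iteration."""
    prev, curr = set(prev_classes), set(curr_classes)
    changed = len(curr - prev) + len(prev - curr)   # added + removed
    return 100.0 * changed / len(curr)

# A burst of new classes mid-project shows up as an instability spike,
# then the measure settles as the design stabilizes.
it1 = ["Order", "Customer"]
it2 = ["Order", "Customer", "Invoice", "Shipment", "Payment"]
it3 = ["Order", "Customer", "Invoice", "Shipment", "Payment"]
spike = design_instability(it1, it2)    # 3 added out of 5 -> 60.0
settle = design_instability(it2, it3)   # no change -> 0.0
```

    Tracked per iteration, a curve of such values trending toward zero is what makes an instability metric usable as a completeness indicator.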

    Software Metrics and Object-Oriented System Evolution

    The Object-Oriented (OO) system design process can be quantitatively measured with metrics. Results from an empirical study of two OO systems that employed some of these metrics are discussed.

    An Empirical Study of Java Design Efficiency in a Client-Server Database System

    This study presents performance comparisons among four Java design architectures in a client-server database system. We found that among the four designs (native JDBC-ODBC bridge, servlet, XML parser, and serialized objects), the last is the most efficient in terms of response time. The four architectures are provided as Java source code for reference.
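
    A response-time comparison like the one above needs only a small harness: run each candidate design against the same request stream and report mean wall-clock time per request. The sketch below is a generic illustration of that method, not the paper's Java source; the two handlers are trivial stand-ins, where a real comparison would issue actual database round trips through each architecture.

```python
# Generic timing harness for comparing alternative data-access designs.
import time

def mean_response_time(handler, requests, repeats=3):
    """Mean wall-clock seconds per request over repeated runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        for req in requests:
            handler(req)
    return (time.perf_counter() - start) / (repeats * len(requests))

# Stand-in handlers; real ones would hit the database via, e.g.,
# a bridge driver versus serialized objects over a socket.
def via_parser(req):
    return "".join(reversed(req))    # simulate per-request parsing work

def via_serialized(req):
    return req                       # simulate cheap deserialization

designs = {
    "xml_parser": mean_response_time(via_parser, ["q1", "q2"]),
    "serialized": mean_response_time(via_serialized, ["q1", "q2"]),
}
```

    Averaging over repeats smooths out scheduler noise; for a fair comparison, all designs must see the same requests against the same database state.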
