
    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support usage of such Items across the spectrum of business use rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach which enriches models with meta-data and description and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management thanks to loose typing and the adoption of a unified approach to Item management and usage.
    Comment: 10 pages; 6 figures. In Proceedings of the 5th Annual International Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore, July 2015. arXiv admin note: text overlap with arXiv:1402.5764, arXiv:1402.575
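
    A minimal sketch of what a description-driven Item might look like in code follows; the class and field names are illustrative assumptions for this listing, not the API of the CERN or health-informatics systems described in the paper. The idea it shows is that the descriptive meta-data travels with the Item, so generic storage, query and provenance services can handle any Item without application-specific typing, and every update is recorded for traceability.

        # Illustrative sketch only: names and fields are assumptions, not the paper's API.
        from dataclasses import dataclass, field
        from datetime import datetime, timezone
        from typing import Any, Dict, List

        @dataclass
        class ItemDescription:
            item_type: str               # e.g. "DetectorCalibration" or "PatientRecord"
            schema_version: str          # version of the description, kept as data
            properties: Dict[str, str]   # property name -> declared type, also data

        @dataclass
        class Item:
            item_id: str
            description: ItemDescription              # meta-data that drives interpretation
            payload: Dict[str, Any]                   # the actual values, loosely typed
            history: List[str] = field(default_factory=list)

            def update(self, changes: Dict[str, Any], actor: str) -> None:
                """Apply changes and record who changed what, giving traceability."""
                self.payload.update(changes)
                stamp = datetime.now(timezone.utc).isoformat()
                self.history.append(f"{stamp} {actor} set {sorted(changes)}")

    Because the description is ordinary data rather than compiled-in types, the same generic code can manage Items from physics, health informatics or business process management alike, which is the loose typing the abstract credits for design simplicity.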

    Object databases as data stores for high energy physics


    HEP data analysis based on an object database store


    Development of a network-integrated feature-driven engineering environment

    Ph.D. (Doctor of Philosophy)

    Nonlinear pricing of information goods

    This paper analyzes optimal pricing for information goods under incomplete information, when both unlimited-usage (fixed-fee) pricing and usage-based pricing are feasible, and administering usage-based pricing may involve transaction costs. It is shown that offering fixed-fee pricing in addition to a non-linear usage-based pricing scheme is always profit-improving in the presence of any non-zero transaction costs, and there may be markets in which a pure fixed fee is optimal. This implies that the optimal pricing strategy for information goods is almost never fully revealing. Moreover, it is proved that the optimal usage-based pricing schedule is independent of the value of the fixed fee, a result that simplifies the simultaneous design of pricing schedules considerably and provides a simple procedure for determining the optimal combination of fixed-fee and non-linear usage-based pricing. The introduction of fixed-fee pricing is shown to increase both consumer surplus and total surplus. The differential effects of setup costs, fixed transaction costs and variable transaction costs on pricing policy are described. These results suggest a number of managerial guidelines for designing pricing schedules. For instance, in nascent information markets, firms may profit from low fixed-fee penetration pricing, but as these markets mature, the optimal pricing mix should expand to include a wider range of usage-based pricing options. The extent of minimum fees, quantity discounts and adoption levels across the different pricing schemes are characterized, strategic pricing responses to changes in market characteristics are described, and the implications of the paper's results for bundling and vertical differentiation of information goods are discussed.
    Keywords: nonlinear pricing, screening, digital goods, information goods, fixed-fee, usage-based pricing, transaction costs
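
    The claim about transaction costs can be illustrated with a toy numerical example (not the paper's formal model; the consumer types, values and cost figures below are invented). With a per-unit metering cost, moving heavy users onto an unmetered fixed fee avoids that cost and raises profit even when the per-unit price is unchanged.

        # Toy illustration with two consumer types and zero marginal cost for the good.
        # The firm pays a transaction cost t per unit only when usage is metered.
        t = 0.10
        types = {"light": {"units": 10, "value_per_unit": 1.0, "share": 0.5},
                 "heavy": {"units": 50, "value_per_unit": 1.0, "share": 0.5}}

        def profit_usage_only(price):
            """Every consumer is metered at a single per-unit price."""
            total = 0.0
            for c in types.values():
                if price <= c["value_per_unit"]:
                    total += c["share"] * c["units"] * (price - t)
            return total

        def profit_mixed(price, fee):
            """Heavy users self-select the unmetered flat fee; light users stay metered.
            With fee = 49.5 the heavy type keeps a surplus of 0.5 versus 0 under metering,
            so the flat plan is strictly preferred."""
            heavy, light = types["heavy"], types["light"]
            total = heavy["share"] * fee                    # no metering cost on the flat plan
            if price <= light["value_per_unit"]:
                total += light["share"] * light["units"] * (price - t)
            return total

        print(profit_usage_only(price=1.0))         # 27.0
        print(profit_mixed(price=1.0, fee=49.5))    # 29.25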

    Handling High-Level Model Changes Using Search Based Software Engineering

    Model-Driven Engineering (MDE) considers models as first-class artifacts during the software lifecycle. The number of available tools, techniques, and approaches for MDE is increasing as its use gains traction in driving quality and controlling cost in the evolution of large software systems. Software models, defined as code abstractions, are iteratively refined, restructured, and evolved. This is due to many reasons, such as fixing defects in design, reflecting changes in requirements, and modifying a design to enhance existing features. In this work, we focus on four main problems related to the evolution of software models: 1) the detection of applied model changes, 2) the merging of parallel evolved models, 3) the detection of design defects in the merged model, and 4) the recommendation of new changes to fix defects in software models. Regarding the first contribution, an a-posteriori multi-objective change detection approach has been proposed for evolved models. The changes are expressed in terms of atomic and composite refactoring operations. The majority of existing approaches detect atomic changes but do not adequately address composite changes, which mask atomic operations in intermediate models. For the second contribution, several approaches exist to construct a merged model by incorporating all non-conflicting operations of the evolved models. Conflicts arise when the application of one operation disables the applicability of another. The essence of the problem is to identify and prioritize conflicting operations based on importance and context – a gap in existing approaches. This work proposes a multi-objective formulation of model merging that aims to maximize the number of successfully applied merged operations. For the third and fourth contributions, the majority of existing work focuses on refactoring at the source code level and does not exploit the benefits of software design optimization at the model level. However, refactoring at the model level is inherently more challenging due to the difficulty of assessing the potential impact on structural and behavioral features of the software system. This requires analysis of class and activity diagrams to appraise the overall system quality, feasibility, and inter-diagram consistency. This work focuses on designing, implementing, and evaluating a multi-objective refactoring framework for the detection and fixing of design defects in software models.
    Ph.D. Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136077/1/Usman Mansoor Final.pdf (Usman Mansoor Final.pdf: Dissertation)
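
    As a rough illustration of the merging objective described above, the sketch below evaluates how many refactoring operations from a merged sequence can actually be applied in a given order; a conflict occurs when an earlier operation removes the element a later one needs. The operation encoding, model representation and fitness function are invented for this listing and are not the dissertation's implementation.

        # Illustrative sketch: a simplified fitness in the spirit of maximising the
        # number of successfully applied merged operations.
        from typing import Dict, List, Tuple

        def applicable(op: Tuple[str, str], model: Dict[str, set]) -> bool:
            """An operation (kind, target) applies only if its target class still exists."""
            kind, target = op
            return target in model["classes"]

        def apply_op(op: Tuple[str, str], model: Dict[str, set]) -> None:
            kind, target = op
            if kind == "remove_class":
                model["classes"].discard(target)
            elif kind == "extract_superclass":
                model["classes"].add(target + "Base")

        def fitness(sequence: List[Tuple[str, str]], base_model: Dict[str, set]) -> int:
            """Objective value: number of operations that can be applied in this order."""
            model = {"classes": set(base_model["classes"])}
            applied = 0
            for op in sequence:
                if applicable(op, model):
                    apply_op(op, model)
                    applied += 1
            return applied

        base = {"classes": {"Order", "Invoice"}}
        seq = [("extract_superclass", "Order"), ("remove_class", "Order"),
               ("extract_superclass", "Order")]   # the last operation conflicts
        print(fitness(seq, base))                  # 2

    A multi-objective search would then explore orderings and subsets of operations, trading this objective off against others such as minimising the number of discarded changes.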

    Towards Multilingual Programming Environments

    Software projects consist of different kinds of artifacts: build files, configuration files, markup files, source code in different software languages, and so on. At the same time, however, most integrated development environments (IDEs) are focused on a single (programming) language. Even if a programming environment supports multiple languages (e.g., Eclipse), IDE features such as cross-referencing, refactoring, or debugging often do not cross language boundaries. What would it mean for a programming environment to be truly multilingual? In this short paper we sketch a vision of a system that integrates IDE support across language boundaries. We propose to build this system on a foundation of unified source code models and metaprogramming. Nevertheless, a number of important and hard research questions still need to be addressed.
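
    A minimal sketch of the kind of unified source code model the paper argues for is shown below; the types and the example symbols are assumptions made for this listing, not an existing IDE API. The point it illustrates is that once declarations and references from different languages live in one model, a "find definition" query no longer stops at a language boundary.

        # Illustrative sketch only: one model shared by artifacts in different languages.
        from dataclasses import dataclass
        from typing import Dict, List, Optional

        @dataclass(frozen=True)
        class Symbol:
            qualified_name: str    # e.g. "orders.OrderService.place"
            language: str          # "java", "xml-config", "sql", ...
            file: str
            line: int

        class UnifiedModel:
            def __init__(self) -> None:
                self.declarations: Dict[str, Symbol] = {}
                self.references: List[Symbol] = []

            def declare(self, sym: Symbol) -> None:
                self.declarations[sym.qualified_name] = sym

            def refer(self, sym: Symbol) -> None:
                self.references.append(sym)

            def find_definition(self, qualified_name: str) -> Optional[Symbol]:
                """Resolve a reference regardless of which language declared the symbol."""
                return self.declarations.get(qualified_name)

        model = UnifiedModel()
        model.declare(Symbol("orders.OrderService.place", "java", "OrderService.java", 42))
        model.refer(Symbol("orders.OrderService.place", "xml-config", "beans.xml", 7))
        print(model.find_definition("orders.OrderService.place"))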

    The construction of a linguistic linked data framework for bilingual lexicographic resources

    Little-known lexicographic resources can be of tremendous value to users once digitised. By extending the digitisation effort for a lexicographic resource, converting the human-readable digital object to a state that is also machine-readable, structured data can be created that is semantically interoperable, thereby enabling the lexicographic resource to access, and be accessed by, other semantically interoperable resources. The purpose of this study is to formulate a process for converting a lexicographic resource in print form into a machine-readable bilingual lexicographic resource by applying linguistic linked data principles, using the English-Xhosa Dictionary for Nurses as a case study. This is accomplished by creating a linked data framework in which data are expressed in the form of RDF triples and URIs, in a manner which allows for extensibility to a multilingual resource. Click languages with characters not typically represented by the Roman alphabet are also considered. The purpose of this linked data framework is to define each lexical entry as “historically dynamic”, instead of “ontologically static” (Rafferty, 2016:5). For a framework whose instances are in constant evolution, focus is thus given to the management of provenance and to the generation of linked data. The output is an implementation framework which provides methodological guidelines for similar language resources in the interdisciplinary field of Library and Information Science.
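
    A brief sketch of what one bilingual entry could look like as linked data follows, using rdflib and the OntoLex-Lemon vocabulary, a common choice for linguistic linked data; the base URI, entry identifiers and choice of predicates are assumptions for this listing and may differ from the framework actually developed in the study.

        # Illustrative sketch: one English-Xhosa entry as RDF triples (rdflib required).
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, DCTERMS

        ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
        VARTRANS = Namespace("http://www.w3.org/ns/lemon/vartrans#")
        EX = Namespace("http://example.org/dictionary/")   # hypothetical base URI

        g = Graph()
        g.bind("ontolex", ONTOLEX)
        g.bind("vartrans", VARTRANS)

        en = EX["entry/nurse-en"]
        xh = EX["entry/umongikazi-xh"]

        g.add((en, RDF.type, ONTOLEX.LexicalEntry))
        g.add((en, ONTOLEX.canonicalForm, EX["form/nurse-en"]))
        g.add((EX["form/nurse-en"], ONTOLEX.writtenRep, Literal("nurse", lang="en")))

        g.add((xh, RDF.type, ONTOLEX.LexicalEntry))
        g.add((xh, ONTOLEX.canonicalForm, EX["form/umongikazi-xh"]))
        g.add((EX["form/umongikazi-xh"], ONTOLEX.writtenRep, Literal("umongikazi", lang="xh")))

        # Link the entries as translations and record provenance, supporting the
        # "historically dynamic" view of each entry.
        g.add((en, VARTRANS.translatableAs, xh))
        g.add((en, DCTERMS.source, Literal("English-Xhosa Dictionary for Nurses")))

        print(g.serialize(format="turtle"))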