    Accurate DOSY measure of out-of-equilibrium systems by permutated DOSY (p-DOSY)

    NMR spectroscopy is an excellent tool for monitoring in-situ chemical reactions. In particular, DOSY measurement is well suited to characterize transient species by determining their sizes. However, here we bring to light a difficulty in DOSY experiments performed on out-of-equilibrium systems. In such a system, the evolution of the concentration of species interferes with the measurement process and introduces a bias in the determination of the diffusion coefficient that may lead to erroneous interpretations. We show that a random permutation of the series of gradient strengths used during the DOSY experiment allows this bias to be averaged out. This approach, which we name p-DOSY, requires no changes to the pulse sequences or to the processing software, and fully restores the accuracy of the measurement. The technique is demonstrated on the monitoring of the anomerization reaction of α- to β-glucose. Comment: Revised version - 15 pages, 8 figures; program archived at 10.5281/zenodo.1926
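    A minimal numerical sketch of the idea described above, assuming a simple Stejskal-Tanner-type attenuation and a linear concentration drift (all constants and names below are illustrative, not the authors' values or code): acquiring the gradient series in a random, permuted order decorrelates the drift from the gradient strength, so the fitted diffusion coefficient is no longer biased.

        import numpy as np

        rng = np.random.default_rng(0)

        D_true = 6.0e-10                        # assumed diffusion coefficient (m^2/s)
        k = 2.0e10                              # lumped Stejskal-Tanner constant (illustrative)
        g = np.linspace(0.05, 0.5, 16)          # gradient strengths, conventional increasing ramp
        drift = np.linspace(1.0, 0.5, g.size)   # concentration decays during acquisition

        def attenuation(grad):
            return np.exp(-D_true * k * grad**2)

        # Conventional DOSY: gradients acquired in increasing order, so the drift is
        # correlated with gradient strength and biases the fitted D.
        signal_seq = attenuation(g) * drift

        # p-DOSY: the same gradients acquired in a random (permuted) order; the drift
        # is now uncorrelated with g^2 and averages out in the fit.
        order = rng.permutation(g.size)
        g_perm = g[order]
        signal_perm = attenuation(g_perm) * drift   # drift still follows acquisition time

        def fit_D(grad, sig):
            slope = np.polyfit(grad**2, np.log(sig), 1)[0]   # ln(S) = -D * k * g^2
            return -slope / k

        print("sequential acquisition:", fit_D(g, signal_seq))
        print("permuted acquisition:  ", fit_D(g_perm, signal_perm))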

    Measuring Modularity in Open Source Code Bases

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
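    As a purely hypothetical illustration of what a size-independent modularity measure can look like (this is not the metric or toolkit described in the article), one can count the fraction of dependencies that stay within a module boundary in a dependency graph:

        # Toy modularity indicator: share of dependency edges that remain inside a
        # top-level package. File names and edges are made up for illustration.
        deps = [
            ("core/a.java", "core/b.java"),
            ("core/b.java", "core/c.java"),
            ("web/x.java",  "web/y.java"),
            ("web/y.java",  "core/a.java"),   # cross-module dependency
        ]

        def module_of(path):
            return path.split("/")[0]          # crude module boundary: top-level directory

        internal = sum(module_of(src) == module_of(dst) for src, dst in deps)
        print(f"internal dependency ratio: {internal}/{len(deps)} = {internal / len(deps):.2f}")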

    How to Model Service Productivity for Data Envelopment Analysis? A Meta-Design Approach

    The rise of the service economy is increasingly reflected in the IS discipline. Since services depend on a co-creation of value between service providers and customers, productivity measurement needs to account for both points of view. In contrast to this evolution, current productivity management concepts often remain limited to the firm instead of focusing on dyadic relationships. Also, software tools frequently constitute expert systems that focus on solving an optimization problem based on a linear program, but do not guide users in setting up a suitable productivity model in the first place. To address this need, we conceptualize software tool support for setting up productivity models for services. Our concept encompasses an extended Data Envelopment Analysis (DEA) approach as its analytical core, but in addition features various tools that help users collaboratively define a productivity measurement model. Since the suitability of such a model is contingent on the environment in which it is applied, the proposed concept constitutes a meta-design that is intended to be applicable to a class of productivity management problems. As an outlook, we present ideas for further research focusing on the implementation and evaluation of IT artefacts compliant with the proposed meta-design.
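    For readers unfamiliar with the analytical core mentioned above, the following sketch solves an input-oriented CCR DEA model in multiplier form as a linear program; the decision-making units, inputs, and outputs are invented for illustration and are not part of the proposed meta-design.

        import numpy as np
        from scipy.optimize import linprog

        # rows = decision-making units (e.g. service units); columns = inputs / outputs
        X = np.array([[20.0, 300.0],    # inputs: staff hours, cost
                      [30.0, 200.0],
                      [40.0, 500.0]])
        Y = np.array([[100.0],          # output: services delivered
                      [120.0],
                      [150.0]])

        def ccr_efficiency(o):
            n, m = X.shape
            s = Y.shape[1]
            # decision variables: output weights u (s of them), then input weights v (m of them)
            c = np.concatenate([-Y[o], np.zeros(m)])                  # maximise u . y_o
            A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
            b_eq = [1.0]                                              # v . x_o = 1
            A_ub = np.hstack([Y, -X])                                 # u . y_j - v . x_j <= 0 for all j
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
            return -res.fun                                           # efficiency score in (0, 1]

        for o in range(X.shape[0]):
            print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")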

    A total quality management (TQM) strategic measurement perspective with specific reference to the software industry

    The dissertation aims to obtain an integrated and comprehensive perspective on measurement issues that play a strategic role in organisations that aim at continuous quality improvement through TQM. A multidimensional definition of quality is proposed in order to view quality holistically. The definition is dynamic, thus its dimensions are subject to evolution. Measurement of the quality dimensions is investigated. The relationships between quality and cost, productivity and profitability respectively are examined. The product quality dimensions are redefined for processes. Measurement is a strategic component of TQM. Integration of financial measures with supplier, customer, performance and internal process measurement is essential for synergism. Measurement of quality management is an additional strategic quality dimension. Applicable research was integrated. Quantitative structures used successfully in industry to achieve quality improvement are important; thus the quality management maturity grid, cleanroom software engineering, software factories, quality function deployment, benchmarking and the ISO 9000 standards are briefly described. Software metrics programs are considered to be an application of a holistic measurement approach to quality. Two practical approaches are identified and a framework for initiating implementation is proposed. Two strategic software measurement issues are reliability and cost estimation. Software reliability measurement and modelling are introduced. A strategic approach to software cost estimation is suggested and the critical role of data collection is emphasized. Different approaches to implementing software cost estimation in organisations are proposed, with a total installed cost template envisaged as the ultimate goal. An overview of selected software cost estimation models is provided. Potential research areas are identified. The linearity/nonlinearity of the software production function is analysed. The synergy between software cost estimation models and project management techniques is investigated. The quantification of uncertainty in activity durations, pertaining to project scheduling, is discussed. Statistical distributions for activity durations are reviewed and compared. A structural view of the criteria determining activity duration distribution selection is provided. Estimation issues are reviewed. The integration of knowledge from dispersed fields leads to new dimensions of interaction. Research and practical experience regarding software metrics and software metrics programs can be successfully applied to address the measurement of strategic indicators in other industries. Business Management, D. Phil. (Operations Research)
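    Two of the quantitative building blocks mentioned above, cost estimation models and three-point activity-duration estimates, can be sketched in a few lines; the constants are the classic Basic COCOMO organic-mode values and the standard PERT formulas, used here only as illustrative examples rather than the dissertation's prescribed models.

        def basic_cocomo_effort(kloc, a=2.4, b=1.05):
            """Basic COCOMO, organic mode: effort in person-months = a * KLOC^b."""
            return a * kloc ** b

        def pert_duration(optimistic, most_likely, pessimistic):
            """PERT three-point estimate: mean and standard deviation of an activity duration."""
            mean = (optimistic + 4 * most_likely + pessimistic) / 6
            std = (pessimistic - optimistic) / 6
            return mean, std

        print("Effort for 32 KLOC:", round(basic_cocomo_effort(32), 1), "person-months")
        print("Activity duration (mean, std):", pert_duration(5, 8, 14), "days")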

    Service-oriented architecture of adaptive, intelligent data acquisition and processing systems for long-pulse fusion experiments

    The data acquisition systems used in long-pulse fusion experiments need to implement data reduction and pattern recognition algorithms in real time. In order to accomplish these operations, it is essential to employ software tools that allow for hot-swap capabilities throughout the temporal evolution of the experiments. This is important because processing needs are not the same during the different phases of the experiment. The intelligent test and measurement system (ITMS) developed by UPM and CIEMAT is an example of a technology for implementing scalable data acquisition and processing systems based on PXI and CompactPCI hardware. In the ITMS platform, a set of software tools allows the user to define the processing algorithms associated with the different experimental phases using state machines driven by software events. These state machines are specified using the State Chart XML (SCXML) language. The software tools are developed using JAVA, JINI, an SCXML engine and several LabVIEW applications. Within this schema, it is possible to execute data acquisition and processing applications in an adaptive way. The power of SCXML semantics and the ability to work with XML user-defined data types allow for very easy programming of the ITMS platform. With this approach, the ITMS platform is a suitable solution for implementing scalable data acquisition and processing systems based on a service-oriented model with the ability to easily implement remote participation applications.
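    A minimal Python sketch of the underlying pattern (the ITMS itself uses SCXML, JAVA, JINI and LabVIEW, so this is only a conceptual illustration with invented phase names): an event-driven state machine whose current state selects the processing routine applied to acquired data, so the algorithm can be swapped between experimental phases without stopping acquisition.

        def raw_reduction(block):                    # e.g. decimation while ramping up
            return block[::10]

        def pattern_search(block):                   # e.g. event detection during flat-top
            return [x for x in block if x > 0.8]

        # state -> (processing routine, {software event: next state})
        machine = {
            "ramp_up":   (raw_reduction,  {"FLAT_TOP_REACHED": "flat_top"}),
            "flat_top":  (pattern_search, {"PULSE_END": "ramp_down"}),
            "ramp_down": (raw_reduction,  {}),
        }
        state = "ramp_up"

        def on_data(block):
            processor, _ = machine[state]
            return processor(block)

        def on_event(event):
            global state
            _, transitions = machine[state]
            state = transitions.get(event, state)

        print(on_data([0.1, 0.9, 0.5, 0.95] * 5))    # processed with the ramp-up routine
        on_event("FLAT_TOP_REACHED")                 # phase change driven by a software event
        print(on_data([0.1, 0.9, 0.5, 0.95]))        # now processed with the flat-top routine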

    Analysis of limitations and metrology weaknesses of enterprise architecture (EA) measurement solutions & proposal of a COSMIC-based approach to EA measurement

    The literature on enterprise architecture (EA) posits that EA is of considerable value for organizations. However, while the EA literature documents a number of proposals for EA measurement solutions, there is little evidence-based research supporting their achievements and limitations. This thesis aims to help the EA community understand the existing trends in EA measurement research and to recognize the existing gaps, limitations, and weaknesses in EA measurement solutions. Furthermore, this thesis aims to assist the EA community in designing EA measurement solutions based on measurement and metrology best practices. The research goal of this thesis is to contribute to the EA body of knowledge by shaping new perspectives for future research avenues in EA measurement research. To achieve this goal, the following research objectives are defined:
    1. To classify the EA measurement solutions into specific categories in order to identify research themes and explain the structure of the research area.
    2. To evaluate the EA measurement solutions from a measurement and metrology perspective.
    3. To identify the measurement and metrology issues in EA measurement solutions.
    4. To propose a novel EA measurement approach based on measurement and metrology guidelines and best practices.
    To achieve the first objective, this thesis conducts a systematic mapping study (SMS) to understand the state of the art of EA measurement research and to classify the research area in order to acquire a general understanding of the existing research trends. To achieve the second and third objectives, this thesis conducts a systematic literature review (SLR) to evaluate the EA measurement solutions from a measurement and metrology perspective and, hence, to reveal the weaknesses of EA measurement solutions and propose relevant solutions to these weaknesses. To perform this evaluation, we develop an evaluation process that combines the components of the evolution theory with the concepts of measurement and metrology best practices, such as ISO 15939. To achieve the fourth objective, we propose a mapping between two international standards:
    • COSMIC - ISO/IEC 19761: a method for measuring the functional size of software.
    • ArchiMate: a modelling language for EA.
    This mapping results in a novel EA measurement approach that overcomes the weaknesses and limitations found in the existing EA measurement solutions. The research results demonstrate that:
    1. The current publications on EA measurement trend toward an increased focus on the “enterprise IT architecting” school of thought, lack the rigorous terminology found in science and engineering, and show limited adoption of knowledge from other disciplines in the proposals of EA measurement solutions.
    2. There is a lack of attention to attaining appropriate metrology properties in EA measurement proposals: all EA measurement proposals are characterized by insufficient metrology coverage scores and insufficient theoretical and empirical definitions.
    3. The proposed EA measurement approach is practical for EA practitioners and easy for organizations to adopt.
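    The measurement principle behind COSMIC (ISO/IEC 19761) can be sketched very compactly: the functional size of a functional process is the number of its data movements (Entry, Exit, Read, Write), each counting one COSMIC Function Point (CFP). How ArchiMate elements are mapped onto these data movements is the thesis's contribution; the process below is a made-up example, not the proposed mapping.

        COSMIC_MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

        # hypothetical functional process identified from an EA model
        functional_process = [
            ("receive order",        "Entry"),
            ("read customer record", "Read"),
            ("store order",          "Write"),
            ("send confirmation",    "Exit"),
        ]

        def cosmic_functional_size(movements):
            """Each identified data movement contributes exactly 1 CFP."""
            return sum(1 for _, kind in movements if kind in COSMIC_MOVEMENTS)

        print(cosmic_functional_size(functional_process), "CFP")   # -> 4 CFP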

    Proactive Quality Guidance for Model Evolution in Model Libraries

    Model evolution in model libraries differs from general model evolution. It limits the scope to what is manageable and allows clear concepts, approaches, solutions, and methodologies to be developed. Looking at model quality in evolving model libraries, we focus on quality concerns related to reusability. In this paper, we put forward our proactive quality guidance approach for model evolution in model libraries. It uses an editing-time assessment linked to a lightweight quality model, corresponding metrics, and simplified reviews, all of which help to guide model evolution by means of quality gates that foster model reusability. Comment: 10 pages, figures. Appears in Models and Evolution Workshop Proceedings of the ACM/IEEE 16th International Conference on Model Driven Engineering Languages and Systems, Miami, Florida (USA), September 30, 201
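    As a purely illustrative sketch of the quality-gate idea (the metrics, names and thresholds are invented and are not the paper's quality model), an editing-time check can compute a few reusability-oriented metrics and report which ones block the gate:

        from dataclasses import dataclass

        @dataclass
        class ModelMetrics:
            documented_elements_ratio: float   # share of model elements with documentation
            interface_count: int               # explicit reuse interfaces exposed by the model
            unresolved_references: int         # dangling cross-model references

        QUALITY_GATE = {
            "documented_elements_ratio": lambda v: v >= 0.8,
            "interface_count":           lambda v: v >= 1,
            "unresolved_references":     lambda v: v == 0,
        }

        def passes_quality_gate(metrics):
            failures = [name for name, check in QUALITY_GATE.items()
                        if not check(getattr(metrics, name))]
            return not failures, failures

        ok, failed = passes_quality_gate(ModelMetrics(0.9, 2, 1))
        print("gate passed" if ok else f"guidance needed on: {failed}")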

    Agent-based simulation of open source evolution

    We present an agent-based simulation model developed to study how size, complexity and effort relate to each other in the development of open source software (OSS). In the model, many developer agents generate, extend, and refactor code modules independently and in parallel. This accords with empirical observations of OSS development. To our knowledge, this is the first model of OSS evolution that includes the complexity of software modules as a limiting factor in productivity, the fitness of the software to its requirements, and the motivation of developers. Validation of the model was done by comparing the simulated results against four measures of software evolution (system size, proportion of highly complex modules, level of complexity control work, and distribution of changes) for four large OSS systems. The simulated results resembled the observed data, except for system size: three of the OSS systems showed alternating patterns of super-linear and sub-linear growth, while the simulations produced only super-linear growth. However, the fidelity of the model for the other measures suggests that developer motivation and the limiting effect of complexity on productivity have a significant effect on the development of OSS systems and should be considered in any model of OSS development.
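    A toy agent-based sketch of the mechanism studied above, in which developer agents extend or refactor modules and rising complexity limits the productivity of extension; all rules and constants are invented for illustration and do not reproduce the authors' model or its validation data.

        import random

        random.seed(1)
        modules = [1.0]                     # complexity of each module; start from one seed module
        steps, developers = 200, 5

        for _ in range(steps):
            for _ in range(developers):
                m = random.randrange(len(modules))
                if modules[m] > 4.0 and random.random() < 0.5:
                    modules[m] *= 0.6       # refactoring: complexity control work
                else:
                    if random.random() < 1.0 / modules[m]:
                        modules[m] += 0.3   # extend an existing module (harder when complex)
                    if random.random() < 0.05:
                        modules.append(1.0) # occasionally create a new module

        print("system size (modules):", len(modules))
        print("highly complex modules:", sum(c > 4.0 for c in modules))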