
    A framework and tool to manage Cloud Computing service quality

    Cloud Computing has generated considerable interest both in companies specialized in Information and Communication Technology and in the business context in general. The Sourcing Capability Maturity Model for service (e-SCM) is a capability model for offshore outsourcing services between clients and providers that offers appropriate strategies to enhance Cloud Computing implementation. It intends to achieve the required quality of service and to develop an effective working relationship between clients and providers. Moreover, the quality evaluation framework is a framework to control the quality of any product and/or process. It offers tool support that can generate software artifacts to manage any type of product and service efficiently and effectively. Thus, the aim of this paper is to make this framework and tool support available to manage Cloud Computing service quality between clients and providers by means of e-SCM. Ministerio de Ciencia e Innovación TIN2013-46928-C3-3-R; Junta de Andalucía TIC-578

    Measuring Software Process: A Systematic Mapping Study

    Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as of the resulting product. It also enables organizations to improve and predict their processes' performance, which places them in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process, to identify relevant studies, to create a classification scheme based on the identified studies, and then to map such studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus and into three groups based on publishing date. Five abstractions and 64 attributes were identified, and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-sized enterprises were the most frequently identified research contexts. Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R; Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R; Ministerio de Economía y Competitividad TIN2015-71938-RED
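
    Goal Question Metric, identified above as one of the main methods, structures a measurement program as a goal refined into questions, each answered by metrics. The sketch below is a minimal, hypothetical example of such a structure; the goal, questions, and metric names are illustrative and not taken from the mapped studies.

        # Hypothetical GQM structure: one goal, refined into questions,
        # each answered by concrete metrics (all names are illustrative).
        gqm = {
            "goal": "Improve the predictability of the development process",
            "questions": [
                {"question": "How stable is the team's delivery rate?",
                 "metrics": ["velocity per sprint", "velocity standard deviation"]},
                {"question": "How accurate are effort estimates?",
                 "metrics": ["estimated vs. actual effort ratio"]},
            ],
        }

        for q in gqm["questions"]:
            print(q["question"], "->", ", ".join(q["metrics"]))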

    Applying graph centrality metrics in visual analytics of scientific standard datasets

    © 2019 by the authors. Graphs are often used to model data with a relational structure, and graphs are usually visualised as node-link diagrams for a better understanding of the underlying data. Node-link diagrams represent not only the data entries in a graph, but also the relations among them. Further, many graph drawing algorithms and graph centrality metrics have been successfully applied in visual analytics of various graph datasets, yet little attention has been paid to analytics of scientific standard data. This study attempts to adopt graph drawing methods (force-directed algorithms) to visualise scientific standard data and to provide information with an importance "ranking" based on graph centrality metrics such as Weighted Degree, PageRank, Eigenvector, Betweenness and Closeness. The outcomes show that our method can produce clear graph layouts of scientific standards for visual analytics, along with the importance "ranking" factors (represented via node colour, size, etc.). Our method may assist users in tracking various relationships while understanding scientific standards with fewer relation issues (missing/wrong connections, etc.) by focusing on higher-priority standards.
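
    The centrality ranking described above can be reproduced with standard graph libraries. The sketch below is a minimal illustration, not the paper's implementation: it assumes a small hypothetical edge list of standards that reference each other, computes the centrality metrics named in the abstract with NetworkX, and maps PageRank to node size for a force-directed (spring) layout.

        # Minimal sketch (hypothetical data): rank standards in a reference
        # graph by several centrality metrics and derive layout and size
        # attributes for a node-link diagram.
        import networkx as nx

        edges = [                       # illustrative "A references B" relations
            ("ISO 9001", "ISO 9000"),
            ("ISO 9004", "ISO 9000"),
            ("ISO 19011", "ISO 9001"),
            ("ISO 14001", "ISO 9000"),
        ]
        G = nx.DiGraph(edges)

        scores = {
            "weighted_degree": dict(G.degree(weight="weight")),
            "pagerank": nx.pagerank(G),
            "eigenvector": nx.eigenvector_centrality(G.to_undirected(), max_iter=1000),
            "betweenness": nx.betweenness_centrality(G),
            "closeness": nx.closeness_centrality(G),
        }

        pos = nx.spring_layout(G, seed=42)                          # force-directed layout
        size = {n: 300 + 3000 * scores["pagerank"][n] for n in G}   # importance -> node size

        for n in sorted(G, key=lambda n: -scores["pagerank"][n]):
            print(f"{n:10} pagerank={scores['pagerank'][n]:.3f} size={size[n]:.0f}")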

    Towards the Quality Improvement of Web Applications by Neuroscience Techniques

    User-centered design requires designers not only to analyse and anticipate how users are likely to use a Web application, but also to validate their assumptions with regard to user behaviour in real environments. Cognitive neuroscience, for its part, addresses the question of how psychological functions are produced by neural circuitry. The emergence of powerful new measurement techniques allows neuroscientists and psychologists to address abstract questions such as how human cognition and emotion are mapped to specific neural substrates. This paper focuses on the validation of user-centered designs and requirements of Web applications by means of neuroscience techniques and suggests the use of these techniques to achieve designs that are efficiently and effectively validated against the real behavior of potential users. Ministerio de Ciencia e Innovación TIN2013-46928-C3-3-R; Junta de Andalucía TIC-578

    Quality measurement in agile and rapid software development: A systematic mapping

    Context: Although agile and rapid software development (ARSD) has been researched and applied extensively, managing quality requirements (QRs) is still challenging. As ARSD processes produce a large amount of data, measurement has become a strategy to facilitate QR management. Objective: This study aims to survey the literature related to QR management through metrics in ARSD, focusing on: bibliometrics, QR metrics, and quality-related indicators used in quality management. Method: The study design includes the definition of research questions, selection criteria, and snowballing as the search strategy. Results: We selected 61 primary studies (2001-2019). Despite a large body of knowledge and standards, there is no consensus regarding QR measurement. Terminology varies, as do the measurement models. However, seemingly different measurement models do contain similarities. Conclusion: The industrial relevance of the primary studies shows that practitioners have a need to improve quality measurement. Our collection of measures and data sources can serve as a starting point for practitioners to include quality measurement in their decision-making processes. Researchers could benefit from the identified similarities to start building a common framework for quality measurement. In addition, this could help researchers identify which quality aspects need more focus, e.g., security and usability, for which few metrics are reported. This work has been funded by the European Union's Horizon 2020 research and innovation program through the Q-Rapids project (grant no. 732253). This research was also partially supported by the Spanish Ministerio de Economía, Industria y Competitividad through the DOGO4ML project (grant PID2020-117191RB-I00). Silverio Martínez-Fernández worked at Fraunhofer IESE before January 2020.

    SemQuaRE - An extension of the SQuaRE quality model for the evaluation of semantic technologies

    To correctly evaluate semantic technologies and to obtain results that can be easily integrated, we need to put evaluations under the scope of a unique software quality model. This paper presents SemQuaRE, a quality model for semantic technologies. SemQuaRE is based on the SQuaRE standard and describes a set of quality characteristics specific to semantic technologies and the quality measures that can be used for their measurement. It also provides detailed formulas for the calculation of such measures. The paper shows that SemQuaRE is complete with respect to current evaluation trends and that it has been successfully applied in practice.
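
    The specific characteristics and formulas belong to the SemQuaRE model itself and are defined in the paper; purely as a generic illustration of the kind of measure such a model can include, the sketch below computes a hypothetical precision/recall measure for a semantic tool's output against a gold standard.

        # Illustrative only: a hypothetical precision/recall measure for a
        # semantic tool's mappings; the actual SemQuaRE measures and formulas
        # are those defined in the paper.
        def precision_recall(found: set, reference: set) -> tuple[float, float]:
            true_positives = len(found & reference)
            precision = true_positives / len(found) if found else 0.0
            recall = true_positives / len(reference) if reference else 0.0
            return precision, recall

        tool_mappings = {("Person", "Human"), ("City", "Town"), ("Car", "Plane")}
        gold_standard = {("Person", "Human"), ("City", "Town"), ("Dog", "Canine")}
        p, r = precision_recall(tool_mappings, gold_standard)
        print(f"precision={p:.2f} recall={r:.2f}")   # 0.67 / 0.67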

    Defining an Indicator for Navigation Performance Measurement in VE Based on ISO/IEC15939

    Navigation is a key factor for immersion and exploration in virtual environments (VE). Nevertheless, measuring navigation performance is not an easy task, especially when analyzing and interpreting the heterogeneous results of the measures used. To address this, we propose in this paper a new indicator for measuring navigation performance in VE based on the ISO/IEC 15939 standard. It allows effective integration of heterogeneous results while retaining their raw values. It also provides a new method that offers a comprehensive graphical visualization of the data for interpreting the results. The experimental study has shown the feasibility of this indicator and its contribution to statistical results. Burgundy Franche-Comté Council
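
    The indicator's exact construction is the paper's own; the sketch below only illustrates the general ISO/IEC 15939 idea under assumed measures and interpretation bounds: heterogeneous base measures keep their raw values and are mapped onto a common scale to derive an interpretable profile.

        # Minimal sketch (not the paper's actual indicator): base measures keep
        # their raw values; a derived profile on [0, 1] supports interpretation.
        # Measure names, bounds, and the min-max normalisation are assumptions.
        from dataclasses import dataclass

        @dataclass
        class BaseMeasure:
            name: str
            raw_value: float    # raw value is retained, as the abstract stresses
            worst: float        # interpretation bounds for this measure
            best: float

            def normalised(self) -> float:
                """Map the raw value onto [0, 1], 1 meaning best performance."""
                return (self.raw_value - self.worst) / (self.best - self.worst)

        # Heterogeneous navigation measures from one VE session (hypothetical).
        measures = [
            BaseMeasure("completion_time_s", raw_value=95.0, worst=180.0, best=30.0),
            BaseMeasure("collisions", raw_value=4, worst=20, best=0),
            BaseMeasure("path_deviation_m", raw_value=12.5, worst=40.0, best=0.0),
        ]

        # Derived indicator: one value per measure plus an overall mean,
        # which could feed a radar-chart style visualisation.
        profile = {m.name: round(m.normalised(), 2) for m in measures}
        profile["overall"] = round(sum(m.normalised() for m in measures) / len(measures), 2)
        print(profile)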

    Measurement Framework for Assessing Quality of Big Data (MEGA) in Big Data Pipelines

    Dave Bhardwaj, Concordia University, 2021. Big Data is used widely in decision-making, and businesses have seen just how powerful data can be, especially in areas such as advertising and marketing. As institutions begin relying on their Big Data systems to make more informed and strategic business decisions, the quality of the underlying data becomes extremely significant. In our research, assessing that quality is accomplished by studying and automating the quality characteristics of Big Data, more specifically the V's of Big Data. In this thesis, our aim is not only to present researchers with useful Big Data quality measurements, but also to bridge the gap between theoretical measurement models of Big Data quality characteristics and the application of these metrics to real-world Big Data systems. Therefore, our thesis proposes a framework (the MEGA framework) that can be applied to Big Data pipelines in order to facilitate the extraction and interpretation of Big Data V's measurement indicators. The proposed framework allows the application of Big Data V's measurements at any phase of the architecture process in order to flag quality anomalies in the underlying data before they can negatively impact the decision-making process. The theoretical quality measurement models for six of the Big Data V's, namely Volume, Variety, Velocity, Veracity, Validity, and Vincularity, are currently automated. The novelty of the MEGA approach includes the ability to: i) process both structured and unstructured data, ii) track a variety of quality indicators defined for the V's, iii) flag datasets that pass a certain quality threshold, and iv) define a general infrastructure for collecting, analyzing, and reporting the V's measurement indicators for trustworthy and meaningful decision-making.
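
    The metric definitions and thresholds below are illustrative assumptions, not those of the MEGA framework; the sketch only shows the general pattern of computing a few V indicators on a batch of records and flagging the batch before it reaches downstream decision-making.

        # Hedged sketch: simple, illustrative indicators for a few Big Data V's,
        # plus a hypothetical quality gate that flags the batch.
        records = [
            {"id": 1, "amount": 10.5, "country": "CA"},
            {"id": 2, "amount": None, "country": "CA"},
            {"id": 3, "amount": 7.2,  "country": ""},
        ]

        def validity(batch):
            """Share of records with no missing/empty fields (illustrative rule)."""
            ok = sum(1 for r in batch if all(v not in (None, "") for v in r.values()))
            return ok / len(batch) if batch else 0.0

        def variety(batch):
            """Number of distinct field sets observed (illustrative proxy)."""
            return len({frozenset(r.keys()) for r in batch})

        indicators = {"volume": len(records),
                      "validity": validity(records),
                      "variety": variety(records)}

        THRESHOLDS = {"validity": 0.8}      # hypothetical quality gate
        flags = {k: indicators[k] < v for k, v in THRESHOLDS.items()}
        print(indicators, flags)            # validity = 0.33 -> flagged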

    A Cybernetic View on Data Quality Management

    Corporate data of poor quality can have a negative impact on the performance of business processes and thereby on the success of companies. In order to be able to work with data of good quality, data quality requirements must be clearly defined. In doing so, one has to take into account that both the provision of high-quality data and the damage caused by low-quality data bring about considerable costs. As each company's database is a dynamic system, the paper proposes a cybernetic view on data quality management (DQM). First, the principles of a closed-loop control system are transferred to the field of DQM. After that, a meta-model is developed that accounts for the central relations between data quality, business process performance, and related costs. The meta-model then constitutes the basis of a simulation technique which aims at the explication of assumptions (e.g. on the effect of improving a data architecture) and the support of DQM decision processes.
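
    A toy simulation can make the closed-loop idea concrete. The decay rate, cost parameters, and proportional controller below are illustrative assumptions, not the paper's meta-model: quality degrades as the database changes, the deviation from a set point triggers cleansing effort, and both the corrective effort and the remaining quality shortfall accumulate costs.

        # Toy closed-loop DQM simulation (all parameters are illustrative).
        SETPOINT = 0.9          # required data quality level (controlled variable)
        DECAY = 0.05            # quality lost per period as the database changes
        GAIN = 0.6              # proportional controller gain
        CLEANSING_COST = 100    # cost per unit of corrective effort
        DEFECT_COST = 500       # cost per unit of quality shortfall in processes

        quality, total_cost = 0.95, 0.0
        for period in range(1, 11):
            quality = max(0.0, quality - DECAY)      # disturbance: data ages
            error = max(0.0, SETPOINT - quality)     # deviation from set point
            effort = GAIN * error                    # control action: cleansing
            quality = min(1.0, quality + effort)
            total_cost += effort * CLEANSING_COST + error * DEFECT_COST
            print(f"period {period:2d}: quality={quality:.3f} cost={total_cost:7.1f}")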