
    On Evaluating Commercial Cloud Services: A Systematic Review

    Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is considerable confusion, and a gap between practice and theory, in Cloud services evaluation. Aim: To help relieve this chaos, this work synthesizes the existing evaluation implementations to outline the state of the practice and to identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The data collected from these studies represent the current practical landscape of Cloud services evaluation, and in turn can be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some findings of this SLR identify research gaps in Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Colour-based image retrieval algorithms based on compact colour descriptors and dominant colour-based indexing methods

    Content-based image retrieval (CBIR) has been one of the most active research areas of the last two decades, yet it remains immature. This study addresses three CBIR performance problems: inaccurate image retrieval, high computational complexity of feature extraction, and degraded retrieval performance after database indexing. These problems make CBIR hard to apply on resource-limited devices such as mobile devices. The main objective of this thesis is therefore to improve CBIR performance. Images' Dominant Colours (DCs) are selected as the key contributor for this purpose because of their compact representation and their compatibility with the human visual system. Semantic image retrieval is proposed to solve the retrieval-inaccuracy problem by concentrating on the images' objects. The effect of the image background is reduced, giving more focus to the object, by assigning weights to the object and background DCs; this raises the accuracy improvement ratio by up to 50% over the compared methods. A weighted-DCs framework is proposed to generalize this technique, and is demonstrated by applying it to several colour descriptors. To reduce the high computational and memory complexity of the colour Correlogram, a compact representation of the Correlogram is proposed. Additionally, the similarity measure of an existing DC-based Correlogram is adapted to improve its accuracy. Both methods are combined to produce a colour descriptor that is promising in terms of time and memory complexity: accuracy increases by up to 30% over the existing methods, and memory use falls to less than 10% of the original space. A framework for converting the abundance of colours into a few DCs is proposed to generalize the DC concept. In addition, two DC-based indexing techniques, using the RGB and perceptual LUV colour spaces, are proposed to overcome the time problem; both reduce the search space to less than 25% of the database size while preserving the same accuracy.
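The object/background weighting idea above can be sketched in a few lines. This is a minimal illustration only: the matching threshold, the weights, and the similarity formula are assumptions for the sketch, not the thesis's exact method.

```python
# Illustrative sketch: dominant-colour (DC) matching with object/background
# weighting. Each DC set is a list of (rgb_tuple, percentage) pairs.

def dc_similarity(dcs_a, dcs_b):
    """Two DCs match when their RGB distance is under an assumed threshold;
    each matched pair contributes the smaller of the two percentages."""
    THRESHOLD = 60.0  # assumed matching radius in RGB space
    score = 0.0
    for (ca, pa) in dcs_a:
        for (cb, pb) in dcs_b:
            dist = sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5
            if dist < THRESHOLD:
                score += min(pa, pb)
    return score

def weighted_query(obj_dcs, bg_dcs, db_image_dcs, w_obj=0.8, w_bg=0.2):
    """Combine object and background DC similarity with fixed weights,
    de-emphasising the background as the abstract suggests."""
    return (w_obj * dc_similarity(obj_dcs, db_image_dcs) +
            w_bg * dc_similarity(bg_dcs, db_image_dcs))
```

With weights of 0.8/0.2, a database image that matches the query object's DCs ranks well even when its background colours differ, which is the intended effect of reducing background influence.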

    Achievements, open problems and challenges for search based software testing

    Search Based Software Testing (SBST) formulates testing as an optimisation problem, which can be attacked using computational search techniques from the field of Search Based Software Engineering (SBSE). We present an analysis of the SBST research agenda, focusing on the open problems and challenges of testing non-functional properties, in particular a topic we call 'Search Based Energy Testing' (SBET), Multi-objective SBST, and SBST for Test Strategy Identification. We conclude with a vision of FIFIVERIFY tools, which would automatically find faults, fix them, and verify the fixes. We explain why we think such FIFIVERIFY tools constitute an exciting challenge for the SBSE community that may already be within its reach.
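The optimisation formulation above can be made concrete with the classic SBST fitness, branch distance. The sketch below is illustrative only: the function under test, the neighbourhood, and the steepest-descent search are assumptions for the example, not any specific tool's implementation.

```python
# Illustrative sketch of search-based test-input generation: hill climbing
# on a branch-distance fitness, the standard SBST formulation.

def under_test(x):
    if x == 2024:          # the branch a test input should cover
        return "target"
    return "other"

def branch_distance(x, target=2024):
    """Branch distance for the predicate x == target: zero when the
    branch is taken, otherwise how far the input is from taking it."""
    return abs(x - target)

def hill_climb(start, max_steps=1000):
    """Steepest-descent hill climbing over a fixed integer neighbourhood."""
    x = start
    for _ in range(max_steps):
        if branch_distance(x) == 0:
            return x                     # covering input found
        neighbour = min((x + step for step in (-100, -10, -1, 1, 10, 100)),
                        key=branch_distance)
        if branch_distance(neighbour) >= branch_distance(x):
            break                        # stuck in a local optimum
        x = neighbour
    return x
```

Because the fitness gives graded guidance rather than a flat "branch taken / not taken" signal, the search descends smoothly toward an input that covers the target branch; a multi-objective SBST variant would optimise several such fitnesses at once.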

    A systematic review of data quality issues in knowledge discovery tasks

    The volume of data is growing rapidly because organizations continuously capture data to support better decision-making. The most fundamental challenge is to explore these large volumes of data and extract useful knowledge for future actions through knowledge discovery tasks; however, much of this data is of poor quality. We present a systematic review of data quality issues in knowledge discovery tasks and a case study applied to the agricultural disease known as coffee rust.
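The data-quality issues that such a review catalogues are typically detected by simple profiling checks before any knowledge discovery begins. The sketch below is an illustrative example, assuming a record layout of plain dictionaries; the field names and thresholds are invented for the demonstration.

```python
# Illustrative sketch: profiling a dataset for three common quality
# problems ahead of knowledge discovery: missing values, duplicate
# records, and out-of-range values.

def profile_quality(records, required_fields, ranges):
    """Return counts of common data-quality problems in a list of dicts."""
    report = {"missing": 0, "duplicates": 0, "out_of_range": 0}
    seen = set()
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
        for field in required_fields:
            if rec.get(field) in (None, ""):
                report["missing"] += 1
        for field, (lo, hi) in ranges.items():
            value = rec.get(field)
            if value is not None and not lo <= value <= hi:
                report["out_of_range"] += 1
    return report
```

In a case study like the coffee-rust one, such a report would flag, for example, sensor readings outside plausible temperature ranges before they distort a learned model.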

    A Multi-Process Quality Model: Identification of Key Processes in the Integration Approach

    Abstract—In this paper we investigate the use of a multi-process quality model in the adoption of process improvement frameworks. We analyze an improvement effort based on the adoption of multiple process quality models. At present, a software development organization may adopt multiple quality and improvement models in order to remain competitive in the IT marketplace. Various quality models have emerged to satisfy different improvement objectives, such as improving model capability, quality management, and IT governance. The heterogeneous characteristics of the models call for further research on dealing with multiple process models at a time. We discuss the concept of software process and give an overview of software maintenance and evolution, which are important elements in the quality models. Concepts related to process quality models and improvement models are also discussed. The research outlined in this paper shows that software processes, maintenance, evolution, quality, and improvement have become very important in software engineering. The synergy among the multi-focused process quality models is examined with respect to process improvement. The research outcome is a set of key processes vital to the implementation of a multi-process quality model.

    Website Quality Evaluation Methodology Universal Star: the first vertex, "Content"

    The Internet continues to grow at a fast pace, with over 1.5 billion websites in 2019 as compared with only one in 1991. The emergence of enormous numbers of websites of various complexities and types makes assessing the quality of these sites a vastly important, difficult and complicated task. With this concern, the current paper proposes a novel approach to website assessment by developing a new Website Quality Evaluation Methodology Universal Star (WQEMUS) with a theoretical and empirical basis. This became possible through the grounded theory methodology, which enables relevant concepts to emerge from data. To improve the reliability and validity of the findings, an extensive literature review, in-depth qualitative interviews, and a user evaluation survey were conducted and combined. The study presents the results of the selection and categorization of generic quality attributes for WQEMUS, with a three-tier structure consisting of top-level quality criteria, sub-criteria and indicators. These quality dimensions are grounded on a combination of subjective and objective indicators. Consequently, WQEMUS is capable of evaluating a wide range of websites irrespective of their domain affiliation and the services they provide, including Web 3.0 sites.
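A three-tier criteria structure of this kind is usually scored by weighted aggregation from indicators up to top-level criteria. The sketch below is an illustrative assumption: the tree shape, criterion names, weights, and scores are invented, not taken from WQEMUS.

```python
# Illustrative sketch: weighted aggregation over a three-tier quality tree
# (indicators -> sub-criteria -> top-level criteria -> overall score).

def aggregate(node):
    """Recursively compute the weighted score of a criteria tree.

    A leaf is {"score": s}; an inner node is
    {"children": [(weight, child), ...]} with weights summing to 1."""
    if "score" in node:
        return node["score"]
    return sum(w * aggregate(child) for w, child in node["children"])

# Hypothetical website assessment: two top-level criteria.
site = {
    "children": [
        (0.5, {"children": [(0.6, {"score": 0.9}),    # content: accuracy
                            (0.4, {"score": 0.7})]}),  # content: currency
        (0.5, {"children": [(1.0, {"score": 0.8})]}),  # usability
    ]
}
```

Because the aggregation is uniform at every tier, the same function scores any website regardless of how many criteria, sub-criteria, or indicators its assessment tree contains.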

    An adaptive trust based service quality monitoring mechanism for cloud computing

    Cloud computing is the newest paradigm in distributed computing, delivering computing resources over the Internet as services. Due to its attractiveness, the market is currently flooded with service providers, so customers must identify the provider that meets their requirements for service quality. Existing service-quality monitoring in cloud computing has been limited to quantification. Continuous improvement and distribution of service-quality scores have been implemented in other distributed computing paradigms, but not specifically for cloud computing. This research investigates the relevant methods and proposes mechanisms for quantifying and ranking the service quality of providers. The solution proposed in this thesis consists of three mechanisms: a service-quality modeling mechanism, an adaptive trust computing mechanism, and a trust distribution mechanism for cloud computing. The Design Research Methodology (DRM) was modified by adding phases, means and methods, and probable outcomes, and this modified DRM is used throughout the study. The mechanisms were developed and tested incrementally until the expected outcome was achieved, and a comprehensive set of experiments was carried out in a simulated environment to validate their effectiveness. The evaluation compared their performance against the combined trust model and the QoS trust model for cloud computing, along with an adapted fuzzy-theory-based trust computing mechanism and a super-agent-based trust distribution mechanism developed for other distributed systems. The results show that the proposed mechanisms reach final trust scores faster and more stably than the existing solutions on all three parameters tested.
    These results are significant for making cloud computing acceptable to users, by allowing them to verify the performance of service providers before making a selection.
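One common way to make a trust score both adaptive and stable is to blend each new quality observation into the running score with a weight that grows when the observation is surprising. The sketch below is an illustrative scheme of that general kind, not the thesis's actual mechanism; all constants are assumptions.

```python
# Illustrative sketch: an adaptive trust-score update. The learning rate
# rises with the size of the "surprise", so trust reacts quickly to sudden
# quality changes yet stays stable under routine observations.

def update_trust(trust, observation, base_alpha=0.1):
    """Blend a new QoS observation (both values in [0, 1]) into the
    running trust score via an adaptive exponential moving average."""
    surprise = abs(observation - trust)          # 0 = as expected
    alpha = min(1.0, base_alpha + 0.5 * surprise)
    return (1 - alpha) * trust + alpha * observation
```

Feeding a stream of observations through this update converges toward the provider's true quality level; the adaptive weight is what lets such a mechanism reach a final trust score faster than a fixed-rate average after a change in provider behaviour.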

    Measuring the Quality of Data Models: An Empirical Evaluation of the Use of Quality Metrics in Practice

    This paper describes the empirical evaluation of a set of proposed metrics for evaluating the quality of data models. A total of twenty-nine candidate metrics were originally proposed, each of which measured a different aspect of the quality of a data model. Action research was used to evaluate the usefulness of the metrics in five application development projects in two private sector organisations. Of the metrics originally proposed, only three "survived" the empirical validation process, and two new metrics were discovered. The result was a set of five metrics that participants felt were manageable to apply in practice. An unexpected finding was that subjective ratings of quality and qualitative descriptions of quality issues were perceived to be much more useful than the metrics. While the idea of using metrics to quantify the quality of data models seems good in theory, the results of this study indicate that it is not so useful in practice. The conclusion is that using a combination of "hard" and "soft" information (metrics, subjective ratings, qualitative descriptions of issues) provides the most effective solution to the problem of evaluating the quality of data models, and that moves towards increased quantification may be counterproductive.
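For a sense of what one "hard" data-model metric looks like, consider a simple completeness measure: the proportion of attributes that carry a definition. The metric and the model representation below are illustrative assumptions, not among the paper's twenty-nine candidates.

```python
# Illustrative sketch of a quantitative data-model metric: definition
# completeness, i.e. the fraction of attributes that are documented.

def definition_completeness(entities):
    """entities: {entity_name: {attribute_name: definition_or_None}}.
    Returns the fraction of attributes with a non-empty definition."""
    total = documented = 0
    for attrs in entities.values():
        for definition in attrs.values():
            total += 1
            if definition:
                documented += 1
    return documented / total if total else 1.0
```

A number like 0.5 from this metric says half the attributes are undocumented, but, as the study found, it says nothing about which gaps matter; that is exactly the kind of context the "soft" qualitative descriptions supply.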