
    A utility-based model to define the optimal data quality level in IT service offerings

    In the information age, enterprises base or enrich their core business activities with the provision of informative services. For this reason, organizations are becoming increasingly aware of data quality issues, which concern evaluating the ability of a data collection to meet users' needs. Data quality is a multidimensional and subjective issue, since it is defined by a variety of criteria whose definition and evaluation depend strictly on the context and users involved. Thus, when considering data quality, the users' perspective should always be treated as fundamental. Authors in the data quality literature agree that providers should adapt, and consequently improve, their service offerings in order to fully satisfy users' demands. However, we argue that, in service provisioning, providers are subject to restrictions stemming, for instance, from cost-benefit assessments. Therefore, we identify the need to reconcile providers' and users' quality targets when defining the optimal data quality level of an informative service. Defining such an equilibrium is a complex issue, since each type of user accessing the service may assign a different utility to the provided information. Considering this scenario, the paper presents a utility-based model of the providers' and customers' interests developed on the basis of multi-class offerings. The model is exploited to analyze the optimal service offerings that allow the efficient allocation of quality improvement activities for the provider.
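
    A minimal sketch of the kind of trade-off this abstract describes: user classes gain diminishing-returns utility from quality while the provider pays a convex cost, and the optimal quality level maximizes the net. All functions, parameters, and figures below are illustrative assumptions, not the paper's actual model.

        import numpy as np

        def user_utility(q, saturation):
            # Diminishing-returns utility a user class derives from quality
            # level q in [0, 1] (assumed form, not the paper's).
            return 1 - np.exp(-q / saturation)

        def provider_cost(q, unit_cost):
            # Convex cost the provider pays to deliver quality level q (assumed form).
            return unit_cost * q ** 2

        def optimal_quality_level(user_classes, unit_cost, grid=1001):
            # Scan candidate quality levels and return the one maximizing
            # aggregate net utility across all user classes.
            q = np.linspace(0.0, 1.0, grid)
            net = sum(w * user_utility(q, s) for w, s in user_classes) - provider_cost(q, unit_cost)
            return q[np.argmax(net)]

        # Two user classes as (weight, saturation) pairs; hypothetical figures.
        classes = [(0.6, 0.3), (0.4, 0.15)]
        print(f"optimal quality level ~ {optimal_quality_level(classes, unit_cost=1.5):.2f}")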

    An ontology-driven topic mapping approach to multi-level management of e-learning resources

    An appropriate use of various pedagogical strategies is fundamental for the effective transfer of knowledge in a flourishing e-learning environment. The resulting overabundance of information, however, needs to be tackled for developing sustainable e-learning. This necessitates an effective representation of, and intelligent access to, learning resources. Topic maps address these problems of representation and retrieval of information in a distributed environment. The former aspect is particularly relevant where the subject domain is complex, and the latter aspect is important where resources are abundant but not easily accessible. Moreover, effective presentation of learning resources based on various pedagogical strategies, along with global capturing and authentication of learning resources, is an intrinsic part of effective management of learning resources. Towards fulfilling this objective, this paper proposes a multi-level ontology-driven topic mapping approach to facilitate effective visualization, classification and global authoring of learning resources in e-learning.
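
    Since the approach builds on topic maps, a minimal sketch of the underlying structure may help: topics, typed associations between topics, and occurrences linking topics to learning resources. The structures, names, and URL below are illustrative assumptions, not the paper's model.

        # Illustrative topic map: topics, typed associations, and occurrences
        # pointing at learning resources (all contents hypothetical).
        topic_map = {
            "topics": ["algorithms", "recursion"],
            "associations": [("recursion", "is-subtopic-of", "algorithms")],
            "occurrences": {"recursion": ["https://example.org/lectures/recursion.pdf"]},
        }

        def subtopics_of(topic, tm):
            # Follow "is-subtopic-of" associations downward from a topic.
            return [a for (a, rel, b) in tm["associations"]
                    if rel == "is-subtopic-of" and b == topic]

        def resources_for(topic, tm):
            # Retrieve the learning resources attached to a topic.
            return tm["occurrences"].get(topic, [])

        print(subtopics_of("algorithms", topic_map))   # ['recursion']
        print(resources_for("recursion", topic_map))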

    Context-aware Data Quality Assessment for Big Data

    Big data changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such an amount of data can create real value only if combined with quality: good decisions and actions are the results of correct, reliable and complete data. In such a scenario, methods and techniques for data quality assessment can support the identification of suitable data to process. While numerous assessment methods have been proposed for traditional databases, in the big data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume and velocity issues. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and the context in which the data are to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest focusing the data quality assessment on only a portion of the dataset and accounting for the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a data quality adapter module, which selects the best configuration for the data quality assessment based on the user's main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study.
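
    To make the sampling idea concrete, the following toy sketch assesses one quality dimension (completeness) on a random portion of a dataset and attaches a confidence factor that shrinks with the sampled fraction. The data, the completeness rule, and the confidence formula are all assumptions for illustration; the paper's actual adapter and metrics may differ.

        import random

        def sampled_completeness(records, fraction):
            # Estimate completeness on a random sample and report a confidence
            # factor for the estimate (formula assumed for illustration).
            sample = random.sample(records, max(1, int(len(records) * fraction)))
            complete = sum(1 for r in sample if all(v is not None for v in r.values()))
            confidence = fraction ** 0.5  # assumed: grows with the sampled fraction
            return complete / len(sample), confidence

        # Hypothetical sensor readings: every seventh record misses its value.
        data = [{"id": i, "temp": (20.0 if i % 7 else None)} for i in range(10_000)]
        score, conf = sampled_completeness(data, fraction=0.1)
        print(f"completeness ~ {score:.3f} (confidence {conf:.2f})")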

    Limitations of Weighted Sum Measures for Information Quality

    In an age dominated by information, information quality (IQ) is one of the most important factors to consider for obtaining competitive advantages. The general approach to the study of IQ has relied heavily on management approaches, IQ frameworks and dimensions. Many IQ measures have been proposed; however, the dimensions in most frameworks are analyzed and assessed independently. Approaches to aggregating values have been discussed, with most research suggesting that the overall quality of information be estimated by summing all weighted dimension scores. In this paper, we review the suitability of this assessment approach. In our research we focus on IQ dependencies and trade-offs, and we aim at demonstrating by means of an experiment that IQ dimensions are dependent. Based on our finding that IQ dimensions are dependent, we discuss implications for IQ improvement. Further research studies can build on our observations.
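
    A minimal sketch of the weighted-sum aggregation the paper questions: two quality profiles with opposite strengths collapse to the same overall score, so the aggregate hides the trade-off between dimensions. Dimension names, weights, and scores are hypothetical.

        def weighted_sum(scores, weights):
            # Aggregate per-dimension IQ scores into a single number.
            return sum(scores[d] * weights[d] for d in scores)

        weights   = {"accuracy": 0.4, "completeness": 0.3, "timeliness": 0.3}
        profile_a = {"accuracy": 0.9, "completeness": 0.9, "timeliness": 0.2}  # accurate but stale
        profile_b = {"accuracy": 0.6, "completeness": 0.6, "timeliness": 0.9}  # fresh but less reliable

        print(weighted_sum(profile_a, weights))  # 0.69
        print(weighted_sum(profile_b, weights))  # 0.69: same score, very different data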

    A capacity and value based model for data architectures adopting integration technologies

    The paper discusses two concepts that have been associated with various approaches to data and information, namely capacity and value, focusing on database architectures and on two types of technologies widely used in integration projects: data integration, in the area of Enterprise Information Integration, and publish & subscribe, in the area of Enterprise Application Integration. Furthermore, the paper proposes and discusses a unifying model for information capacity and value that also considers quality constraints and the run-time costs of the database architecture.

    Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques and Assurance Actions

    Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, providing opinions or ideas, and similar: all tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and/or everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, this survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on hot future research directions.
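
    One classic assurance action in this space is redundancy with aggregation: assign the same task to several workers and keep the majority answer, using the agreement level as a rough quality signal. The sketch below assumes hypothetical task data and is not taken from the survey itself.

        from collections import Counter

        def majority_label(answers):
            # Return the most frequent label and the agreement ratio behind it.
            label, votes = Counter(answers).most_common(1)[0]
            return label, votes / len(answers)

        # Five hypothetical workers labeling the same image.
        answers = ["cat", "cat", "dog", "cat", "cat"]
        label, agreement = majority_label(answers)
        print(f"{label} (agreement {agreement:.0%})")  # cat (agreement 80%)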

    Application Driven IT Service Management for Energy Efficiency

    Considering the ever-increasing use of information technology in our everyday life and the huge concentration of computational resources at remote service centers, energy costs have become one of the biggest challenges for IT managers. Mechanisms to improve energy efficiency in service centers operate at different levels, ranging from single components to the whole facility, and consider both equipment and application issues. In this paper we analyze energy efficiency issues at the application level, focusing on e-business processes. Our approach proposes a new method to evaluate and apply green adaptation strategies based on the characteristics of the service application with respect to the business process, taking non-functional requirements into account.
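
    As a rough illustration of requirement-aware green adaptation, the sketch below picks an energy-saving action only when the process's response-time requirement leaves enough slack. Strategy names and thresholds are invented for the example and are not the paper's method.

        def choose_adaptation(max_response_ms, measured_ms, utilization):
            # Pick an energy-saving action that still respects the process's
            # response-time requirement (thresholds are illustrative).
            slack = max_response_ms - measured_ms
            if utilization < 0.2 and slack > 200:
                return "consolidate the service onto fewer servers"
            if slack > 100:
                return "scale down CPU frequency (DVFS)"
            return "no adaptation: the requirement leaves too little slack"

        print(choose_adaptation(max_response_ms=500, measured_ms=250, utilization=0.15))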