
    The Influence of Green Strategies Design onto Quality Requirements Prioritization

    [Context and Motivation] Modern society faces important challenges in improving its environmental performance. The literature reports many green strategies aimed at reducing energy consumption; however, little research has so far addressed the inclusion of green strategies in software design. [Question/problem] In this paper, we investigate how green software strategies can contribute to, and influence, quality requirements prioritization performed iteratively throughout a service-oriented software design process. [Methodology] In collaboration with a Dutch industry partner, we carried out an empirical study with 19 student teams playing the role of software designers, who completed the design of a real-life project through 7 weekly deliverables. [Principal ideas/results] We identified a list of quality requirements (QRs) that the teams considered as part of their architectural decisions when green strategies were introduced. By analyzing the relations between QRs and green strategies, our study confirms usability as the QR most used to address green strategies that raise people's awareness. Qualities like reliability, performance, interoperability, scalability, and availability emerged as the most relevant for addressing service-awareness green strategies. [Contribution] Used at the beginning of a green software project, our results help include the most relevant QRs for addressing the green software strategies that are, e.g., the most domain-generic (such as increasing carbon-footprint awareness, paperless service provisioning, and virtualization).

    Prediction, Recommendation and Group Analytics Models in the domain of Mashup Services and Cyber-Argumentation Platform

    Mashup application development is becoming a widespread software development practice due to its appeal of a shorter application development period. Application developers usually use web APIs from different sources to create a new streamlined service and provide various features to end-users. This practice saves time and ensures reliability, accuracy, and security in the developed applications. Mashup application developers integrate these available APIs into their applications, but they have to sift through thousands of available web APIs and choose only a few appropriate ones for their application. Recommending relevant web APIs might help application developers in this situation. However, very low API invocation from mashup applications creates a sparse mashup-web API dataset for the recommendation models to learn about the mashups and their web API invocation patterns. One aim of this research is to analyze these mashup-specific critical issues, look for supplemental information in the mashup domain, and develop web API recommendation models for mashup applications. The developed recommendation model generates useful and accurate web API recommendations that reduce the impact of low API invocations in mashup application development.

    Cyber-argumentation platforms face a similarly challenging issue. In large-scale cyber-argumentation platforms, participants express their opinions, engage with one another, and respond to feedback and criticism from others while discussing important issues online. Argumentation analysis tools capture the collective intelligence of the participants and reveal hidden insights from the underlying discussions. However, such analysis requires that the issues have been thoroughly discussed and that participants' opinions are clearly expressed and understood. Participants typically focus on only a few ideas and leave others unacknowledged and under-discussed. This generates a limited dataset to work with, resulting in an incomplete analysis of issues in the discussion. One solution to this problem is to develop an opinion prediction model for cyber-argumentation that predicts participants' opinions on ideas with which they have not explicitly engaged.

    In cyber-argumentation, individuals interact with each other without any group coordination. However, implicit group interaction can impact the participating users' opinions, attitudes, and discussion outcomes. One objective of this research is therefore to analyze different group analytics in the cyber-argumentation environment. We design an experiment to inspect whether the key concepts of the Social Identity Model of Deindividuation Effects (SIDE) hold in our argumentation platform; this experiment can help us understand whether anonymity and a sense of group membership impact users' behavior. Another part of this work develops group interaction models that illuminate different aspects of group interactions in the cyber-argumentation platform.

    Together, these efforts yield web API recommendation models tailored to the mashup-specific domain and opinion prediction models for the cyber-argumentation area. These models primarily utilize domain-specific knowledge and integrate it with traditional prediction and recommendation approaches. Our work on group analytics can be seen as an initial step toward understanding these group interactions.
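
    To make the sparsity problem concrete, here is a minimal, hypothetical sketch (not the dissertation's actual model) of a hybrid web API recommender for mashups: it blends a co-invocation signal from the sparse mashup-API matrix with a content signal from API descriptions, the kind of supplemental domain information the abstract refers to. All names and data are illustrative.

```python
# Hypothetical sketch: hybrid web API recommendation for mashups, blending
# co-invocation statistics with API description similarity to offset sparsity.
from collections import Counter
from math import sqrt

# Toy data: which APIs each mashup invokes, plus short API descriptions.
mashup_apis = {
    "travel-planner": {"maps", "weather"},
    "event-finder": {"maps", "tickets"},
    "photo-diary": {"photos", "maps"},
}
api_descriptions = {
    "maps": "geolocation routing places",
    "weather": "forecast temperature conditions",
    "tickets": "events concerts booking",
    "photos": "images upload gallery",
}

def co_invocation_score(candidate, invoked):
    """Fraction of known mashups that use `candidate` together with any invoked API."""
    hits = sum(1 for apis in mashup_apis.values()
               if candidate in apis and apis & invoked)
    return hits / max(len(mashup_apis), 1)

def text_score(candidate, invoked):
    """Cosine similarity between the candidate's description and the invoked APIs'."""
    cand = Counter(api_descriptions[candidate].split())
    ctx = Counter(w for a in invoked for w in api_descriptions[a].split())
    dot = sum(cand[w] * ctx[w] for w in cand)
    norm = sqrt(sum(v * v for v in cand.values())) * sqrt(sum(v * v for v in ctx.values()))
    return dot / norm if norm else 0.0

def recommend(invoked, alpha=0.5):
    """Rank unused APIs by a weighted blend of the two signals."""
    candidates = set(api_descriptions) - invoked
    return sorted(candidates,
                  key=lambda a: alpha * co_invocation_score(a, invoked)
                              + (1 - alpha) * text_score(a, invoked),
                  reverse=True)

print(recommend({"maps"}))  # some ranking of ['tickets', 'weather', 'photos']
```

    When invocation data is too sparse for the collaborative signal to discriminate, the description-based score keeps the ranking informative; the blend weight alpha would be tuned on held-out data.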

    A small-step approach to multi-trace checking against interactions

    Interaction models describe the exchange of messages between the different components of distributed systems. We have previously defined a small-step operational semantics for interaction models. This paper extends that work by presenting an approach for checking the validity of multi-traces against interaction models. A multi-trace is a collection of traces (sequences of emissions and receptions), each representing a local view of the same global execution of the distributed system. We have formally proven our approach, studied its complexity, and implemented it in a prototype tool. Finally, we discuss some observability issues that arise when testing distributed systems via the analysis of multi-traces.
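
    The following toy sketch illustrates the small-step idea in the simplest possible setting (the paper's interaction language and formal semantics are far richer): an interaction term exposes its immediately executable actions, and a multi-trace is accepted if some sequence of small steps consumes every local trace. The operators and action labels below are assumptions for illustration.

```python
# Hypothetical toy sketch of small-step multi-trace checking. An interaction
# term is an action ("act", lifeline, label) or "seq"/"alt"/"par" over sub-terms.
EMPTY = ("empty",)

def accepts_empty(term):
    """Can the term terminate without executing any further action?"""
    kind = term[0]
    if kind == "empty":
        return True
    if kind == "act":
        return False
    if kind == "alt":
        return any(accepts_empty(b) for b in term[1:])
    return all(accepts_empty(b) for b in term[1:])  # seq, par

def steps(term):
    """Enumerate (action, remainder) pairs: the immediately executable actions."""
    kind = term[0]
    if kind == "act":
        yield term, EMPTY
    elif kind == "alt":
        for branch in term[1:]:
            yield from steps(branch)
    elif kind == "par":
        left, right = term[1], term[2]
        for act, rem in steps(left):
            yield act, ("par", rem, right)
        for act, rem in steps(right):
            yield act, ("par", left, rem)
    elif kind == "seq":
        left, right = term[1], term[2]
        for act, rem in steps(left):
            yield act, ("seq", rem, right)
        if accepts_empty(left):
            yield from steps(right)

def check(term, multitrace):
    """Does the interaction accept the multi-trace (dict: lifeline -> trace)?"""
    if all(not t for t in multitrace.values()):
        return accepts_empty(term)
    for (_, lifeline, label), rem in steps(term):
        trace = multitrace.get(lifeline, ())
        if trace and trace[0] == label:
            consumed = dict(multitrace, **{lifeline: trace[1:]})
            if check(rem, consumed):
                return True
    return False

# l1 emits m1; then l1's second emission and l2's reception run in parallel.
i = ("seq", ("act", "l1", "m1!"),
            ("par", ("act", "l1", "m2!"), ("act", "l2", "m1?")))
print(check(i, {"l1": ("m1!", "m2!"), "l2": ("m1?",)}))  # True
```

    Each small step consumes one event from the head of one local trace, which is what makes the check tractable against a collection of purely local views.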

    Querying and managing opm-compliant scientific workflow provenance

    Provenance, the metadata that records the derivation history of scientific results, is important in scientific workflows for interpreting, validating, and analyzing the results of scientific computing. Recently, to promote and facilitate interoperability among heterogeneous provenance systems, the Open Provenance Model (OPM) has been proposed and has played an important role in the community. In this dissertation, to efficiently query and manage OPM-compliant provenance, we first propose a provenance collection framework that collects both prospective provenance, which captures an abstract workflow specification as a recipe for future data derivation, and retrospective provenance, which captures past workflow execution and data derivation information. We then propose a relational database-based provenance system, called OPMPROV, that stores, reasons over, and queries OPM-compliant prospective and retrospective provenance. We finally propose OPQL, an OPM-level provenance query language defined directly over the OPM model. An OPQL query takes an OPM graph as input and produces an OPM graph as output; therefore, OPQL queries are not tightly coupled to the underlying provenance storage strategies. Our provenance store, provenance collection framework, and provenance query language feature native support of the OPM model.
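
    As a rough illustration of the graph-in/graph-out query style (the actual OPQL syntax and the OPMPROV schema are not reproduced here), the hypothetical sketch below encodes an OPM-style graph as a set of edge triples and computes the transitive wasDerivedFrom closure of an artifact, returning a subgraph:

```python
# Hypothetical sketch: an OPM-style graph as (cause, dependency, effect)
# triples, e.g. "a2 wasDerivedFrom a1", and one OPQL-like closure query.
opm_edges = {
    ("a2", "wasDerivedFrom", "a1"),
    ("a3", "wasDerivedFrom", "a2"),
    ("a2", "wasGeneratedBy", "p1"),
    ("p1", "used", "a1"),
}

def derivation_closure(graph, artifact):
    """Subgraph of everything `artifact` transitively derives from.
    Like an OPQL query, it maps an OPM graph to an OPM graph."""
    frontier, visited, result = {artifact}, set(), set()
    while frontier:
        node = frontier.pop()
        if node in visited:
            continue
        visited.add(node)
        for src, relation, dst in graph:
            if src == node and relation == "wasDerivedFrom":
                result.add((src, relation, dst))
                frontier.add(dst)
    return result

print(derivation_closure(opm_edges, "a3"))
# {('a3', 'wasDerivedFrom', 'a2'), ('a2', 'wasDerivedFrom', 'a1')}
```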

    On-line real-time service allocation and scheduling for distributed data centers

    With the prosperity of cluster computing, cloud computing, grid computing, and other distributed high-performance computing systems, Internet service requests are becoming more and more diverse. The large variety of services, together with differing Quality of Service (QoS) considerations, makes it challenging to design effective allocation and scheduling algorithms that satisfy the overall service requirements, especially for distributed systems. In addition, energy consumption is drawing more and more concern. In this paper, we study a new energy-efficient, profit- and penalty-aware allocation and scheduling approach for distributed data centers in a multi-electricity-market environment. Our approach efficiently manages computing resources to minimize the dollar cost of processing and transfer energy in an environment with varying electricity prices. Our extensive experimental results show that the new approach can significantly cut down the energy dollar cost and achieve higher retained profit for the system.
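
    A minimal, hypothetical sketch of the underlying idea (the paper's approach is more elaborate, e.g. profit- and penalty-aware over time): greedily route each request to the data center whose current electricity price plus transfer cost yields the lowest dollar cost, subject to capacity. The prices, capacities, and cost model below are invented for illustration.

```python
# Hypothetical greedy dispatcher: pick the feasible data center minimizing
# estimated dollar cost = request energy * local electricity price + transfer cost.
datacenters = {
    "dc-east": {"price": 0.09, "capacity": 2},   # $/kWh, free slots
    "dc-west": {"price": 0.05, "capacity": 1},
    "dc-eu":   {"price": 0.12, "capacity": 3},
}
TRANSFER_COST = {"dc-east": 0.01, "dc-west": 0.03, "dc-eu": 0.05}  # $ per request

def dispatch(request_kwh):
    """Route one request to the cheapest data center that still has capacity."""
    feasible = [(dc, request_kwh * info["price"] + TRANSFER_COST[dc])
                for dc, info in datacenters.items() if info["capacity"] > 0]
    if not feasible:
        return None  # in a profit/penalty-aware model, rejection incurs a penalty
    dc, cost = min(feasible, key=lambda pair: pair[1])
    datacenters[dc]["capacity"] -= 1
    return dc, round(cost, 4)

for kwh in (1.0, 1.0, 1.0):
    print(dispatch(kwh))
# ('dc-west', 0.08) then ('dc-east', 0.1) twice: once dc-west fills up,
# the dispatcher falls back to the next-cheapest center.
```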

    Automatic Identification of Addresses: A Systematic Literature Review

    Cruz, P., Vanneschi, L., Painho, M., & Rita, P. (2022). Automatic Identification of Addresses: A Systematic Literature Review. ISPRS International Journal of Geo-Information, 11(1), 1-27. https://doi.org/10.3390/ijgi11010011

    The work by Leonardo Vanneschi, Marco Painho and Paulo Rita was supported by Fundação para a Ciência e a Tecnologia (FCT) within the project UIDB/04152/2020—Centro de Investigação em Gestão de Informação (MagIC). The work by Prof. Leonardo Vanneschi was also partially supported by FCT, Portugal, through funding of project AICE (DSAIPA/DS/0113/2019).

    Address matching continues to play a central role at various levels, through geocoding and the integration of data from different sources, with a view to promoting activities such as urban planning, location-based services, and the construction of databases like those used in census operations. However, the task of address matching continues to face several challenges, such as non-standard or incomplete address records or addresses written in more complex languages. In order to better understand how current limitations can be overcome, this paper presents a systematic literature review focused on automated approaches to address matching and their evolution over time. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed, resulting in a final set of 41 papers published between 2002 and 2021, the great majority after 2017, with Chinese authors leading the way. The main findings reveal a consistent move from more traditional approaches to deep learning methods based on semantics, encoder-decoder architectures, and attention mechanisms, as well as the very recent adoption of hybrid approaches making increased use of spatial constraints and entities. The adoption of evolutionary-based approaches and privacy-preserving methods stand out as some of the research gaps to address in future studies.
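
    For contrast with the deep learning methods the review highlights, here is a hypothetical sketch of a traditional baseline address matcher of the kind such methods are measured against: normalize both strings, expand a few abbreviations, and score the token sets with Jaccard similarity. The abbreviation table is illustrative only.

```python
# Hypothetical traditional baseline: normalized token-set Jaccard matching.
import re

ABBREVIATIONS = {"st": "street", "ave": "avenue", "rd": "road"}  # illustrative

def normalize(address):
    """Lowercase, split into alphanumeric tokens, expand known abbreviations."""
    tokens = re.findall(r"[a-z0-9]+", address.lower())
    return {ABBREVIATIONS.get(t, t) for t in tokens}

def match_score(a, b):
    """Jaccard similarity of the normalized token sets, in [0, 1]."""
    ta, tb = normalize(a), normalize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

print(match_score("221B Baker St.", "221b baker street"))  # 1.0
```

    Such baselines break down on the non-standard, incomplete, or morphologically complex addresses the review identifies, which is what motivates the shift toward semantic, encoder-decoder, and attention-based models.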

    A Service-Oriented Approach for Network-Centric Data Integration and Its Application to Maritime Surveillance

    Maritime-surveillance operators still demand an integrated maritime picture that better supports international coordination of their operations, as sought in the European area. In this area, many data-integration efforts have in the past been framed as the problem of designing, building, and maintaining huge centralized repositories. Current research activities instead leverage service-oriented principles to achieve more flexible and network-centric solutions to systems and data integration. In this direction, this article reports on the design of a SOA platform, the Service and Application Integration (SAI) system, targeting novel approaches to legacy data and systems integration in the maritime-surveillance domain. We have developed a proof-of-concept of the main system capabilities to assess the feasibility of our approach and to evaluate how the SAI middleware architecture can fit application requirements for dynamic data search, aggregation, and delivery in the distributed maritime domain.
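
    The network-centric alternative to a central repository can be sketched roughly as follows (a hypothetical illustration, not the SAI system's actual interfaces): legacy sources are wrapped behind a common service contract, and an aggregator fans each query out and merges the answers at request time.

```python
# Hypothetical sketch of service-oriented data integration: adapters wrap
# legacy sources behind one contract; an aggregator merges results on demand.
from typing import Protocol

class TrackSource(Protocol):
    def search(self, area: str) -> list[dict]: ...

class LegacyRadarAdapter:
    def search(self, area: str) -> list[dict]:
        return [{"ship": "MV Aurora", "area": area, "source": "radar"}]

class AISFeedAdapter:
    def search(self, area: str) -> list[dict]:
        return [{"ship": "SS Meridian", "area": area, "source": "ais"}]

class Aggregator:
    def __init__(self, sources: list[TrackSource]):
        self.sources = sources

    def search(self, area: str) -> list[dict]:
        """Fan the query out to every registered source and merge the answers."""
        return [track for s in self.sources for track in s.search(area)]

sai = Aggregator([LegacyRadarAdapter(), AISFeedAdapter()])
print(sai.search("North Sea"))
```

    Because data stays at the sources and is fetched per query, new legacy systems can be integrated by adding an adapter rather than by migrating their data into a central store.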