    Quality measures for ETL processes: from goals to implementation

    Extraction, transformation, and loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and there is a need for a more human-centric approach that brings these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model of ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) to evaluate and analyse alternative design decisions.
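
    The paper defines its own quality model; purely as an illustration of the idea of pairing quality characteristics with quantitative indicators and using them to compare design alternatives, the sketch below uses hypothetical characteristic names, indicator names, thresholds, and measured values that are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A quantitative measure attached to a quality characteristic (hypothetical)."""
    characteristic: str       # e.g. "freshness", "reliability"
    name: str                 # e.g. "max staleness (min)"
    threshold: float
    higher_is_better: bool = False

    def satisfied(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

# Hypothetical indicators for two ETL quality characteristics.
indicators = [
    Indicator("freshness", "max staleness (min)", threshold=30.0),
    Indicator("reliability", "successful runs (%)", threshold=99.0, higher_is_better=True),
]

# Hypothetical measured values for two alternative ETL designs.
designs = {
    "incremental_load": {"max staleness (min)": 12.0, "successful runs (%)": 98.5},
    "full_reload":      {"max staleness (min)": 45.0, "successful runs (%)": 99.6},
}

for design, measures in designs.items():
    results = {i.name: i.satisfied(measures[i.name]) for i in indicators}
    print(design, results)
```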

    Business process management and digital innovations: a systematic literature review

    Emerging technologies have capabilities to reshape business process management (BPM) from its traditional version to a more explorative variant. However, to exploit the full benefits of new IT, it is essential to reveal BPM’s research potential and to detect recent trends in practice. Therefore, this work presents a systematic literature review (SLR) with 231 recent academic articles (from 2014 until May 2019) that integrate BPM with digital innovations (DI). We position those articles against seven future BPM-DI trends that were inductively derived from an expert panel. By complementing the expected trends in practice with a state-of-the-art literature review, we are able to derive covered and uncovered themes in order to help bridge a rigor-relevance gap. The major technological impacts within the BPM field seem to focus on value creation, customer engagement, and managing human-centric and knowledge-intensive business processes. Finally, our findings are categorized into specific calls for research and for action to let scholars and organizations better prepare for future digital needs.

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics from Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Envisioning Model-Based Performance Engineering Frameworks

    Our daily activities depend on complex software systems that must guarantee certain performance. Several approaches have been devised in the last decade to validate software systems against performance requirements. However, software designers still encounter problems in the interpretation of performance analysis results (e.g., mean values, probability distribution functions) and in the definition of design alternatives (e.g., splitting a software component in two and redeploying one of them) aimed at fulfilling performance requirements. This paper describes a general model-based performance engineering framework that supports designers in dealing with such problems, with the aim of enhancing the system. The framework relies on a formalization of the knowledge needed to characterize performance flaws and provide alternative system designs. Such knowledge can be instantiated based on the techniques devised for interpreting performance analysis results and providing feedback to designers. Three techniques are considered in this paper for instantiating the framework, and the main challenges to be faced during this process are pointed out and discussed.
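
    As a loose illustration of the general idea (not the framework described in the paper), a rule that interprets analysis results and suggests a design alternative could look like the following sketch; all component names, metrics, and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ComponentMetrics:
    """Hypothetical performance-analysis results for one software component."""
    name: str
    mean_response_time_ms: float
    utilization: float  # 0.0 - 1.0

def suggest_refactorings(metrics: list[ComponentMetrics],
                         response_req_ms: float) -> list[str]:
    """Map observed performance flaws to candidate design alternatives."""
    suggestions = []
    for m in metrics:
        if m.mean_response_time_ms > response_req_ms and m.utilization > 0.8:
            # Saturated component violating the requirement:
            # propose splitting it and redeploying one part.
            suggestions.append(
                f"split '{m.name}' and redeploy one part on a separate node")
    return suggestions

print(suggest_refactorings(
    [ComponentMetrics("order-service", 420.0, 0.93),
     ComponentMetrics("catalog-service", 80.0, 0.40)],
    response_req_ms=250.0))
```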

    When the worst-case execution time estimation gains from the application semantics

    Critical embedded systems are generally composed of repetitive tasks that must meet strict timing constraints, such as termination deadlines. Providing an upper bound on the worst-case execution time (WCET) of such tasks at design time is thus necessary to prove the correctness of the system. Static timing analysis methods compute safe WCET upper bounds, but at the cost of a potentially large over-approximation. Over-approximation may come from the fact that WCET analysis may consider as potential worst cases some executions that are actually infeasible, because of the semantics of the program and/or because they correspond to unrealistic inputs. In this paper, we introduce a complete semantic-aware WCET estimation workflow. We introduce program analyses to find infeasible paths: they can be performed at the design, C, or binary level, and may take into account information provided by the user. We design an annotation-aware compilation process that makes it possible to trace infeasible-path properties through the program transformations performed by the compiler. Finally, we adapt the WCET estimation tool to take into account the kind of annotations produced by the workflow.
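
    To illustrate what an infeasible path is, here is a generic toy example (not the paper's toolchain or annotation format): the two guarded blocks below can never execute in the same run, so any path through both is semantically infeasible and a semantics-aware analysis can exclude it from the WCET bound.

```python
def expensive_filter() -> int:
    return 40   # hypothetical cost, in cycles

def expensive_fallback() -> int:
    return 60   # hypothetical cost, in cycles

def control_step(mode: int) -> int:
    cost = 0
    if mode > 10:                    # block A
        cost += expensive_filter()
    if mode < 5:                     # block B
        cost += expensive_fallback()
    return cost

# A purely structural bound would sum both blocks (40 + 60), but no value of
# `mode` satisfies both guards: the path through A and B is infeasible, so a
# semantics-aware analysis can tighten the bound to max(40, 60).
assert not any(m > 10 and m < 5 for m in range(-100, 100))
```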

    A model-based runtime environment for adapting communication systems

    With increasing network sizes, mobility, and traffic, it becomes a challenging task to achieve goals such as continuously delivering a satisfying service quality. Self-adaptive approaches use feedback loops to adapt a managed resource at runtime according to changes in the execution context. Adding self-adaptive capabilities to communication systems (computer networks as well as supporting structures such as overlays or middleware) is a major research focus. However, making a communication system self-adaptive is a challenging task for communication system developers. First, the distributed nature of such systems requires the collection of monitoring information from multiple hosts and the adaptation of distributed components. Second, communication systems consist of heterogeneous components, which are, e.g., developed in different programming languages. Third, system developers typically lack knowledge about the development of self-adaptive systems. Hence, this work's overall goal is to allow system developers to focus on making a (legacy) communication system adaptive. Motivated by these observations, this thesis proposes REACT, a model-based runtime environment for adapting communication systems. In contrast to self-adaptation frameworks, which offer a standard way to build self-adaptive applications, we refer to REACT as a runtime environment, i.e., a platform that is additionally able to plan and execute adaptations based on user-specified adaptation behavior. REACT includes support for decentralized adaptation logics and distributed systems, multiple programming languages, as well as tool support and assistance for developers. The developer support is achieved using model-based techniques for specifying the reconfiguration behavior of the adaptation logic. This thesis also proposes an easy-to-follow development process. As part of that process, the reconfiguration behavior of the self-adaptive system needs to be monitored. Hence, this work also presents two dashboard-based visualization approaches, CoalaViz and EnTrace, for providing traceability of self-adaptive systems to system developers and administrators. This thesis follows a design science research methodology resulting in the design and implementation of the final artifacts. In doing so, this dissertation presents different REACT Loops, including specific ways to model and plan the adaptive behavior using satisfiability, mixed-integer linear programming, and constraint solvers. The prototypes of these approaches, including the two visualization solutions, are evaluated in multiple use cases. This work therefore provides an end-to-end solution for specifying the adaptive behavior, connecting a managed resource, deploying the system, and debugging and monitoring it.
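
    The thesis defines its own loop designs; as a generic sketch of the monitor-analyze-plan-execute feedback-loop idea that such runtime environments build on (hypothetical names and thresholds, not REACT's API), one iteration might look like this:

```python
import random

def monitor() -> dict:
    """Collect (here: simulated) metrics from the managed communication system."""
    return {"latency_ms": random.uniform(10, 200)}

def analyze(metrics: dict) -> bool:
    """Decide whether the current configuration violates the adaptation goal."""
    return metrics["latency_ms"] > 100          # hypothetical goal threshold

def plan(metrics: dict) -> dict:
    """Choose a reconfiguration; real planners may use SAT, MILP, or constraint solvers."""
    return {"action": "switch_overlay", "target": "low-latency"}

def execute(adaptation: dict) -> None:
    """Apply the adaptation to the managed resource (stubbed here)."""
    print("executing:", adaptation)

# One iteration of the feedback loop; a runtime environment would run this continuously.
metrics = monitor()
if analyze(metrics):
    execute(plan(metrics))
```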

    GUIDE FOR THE COLLECTION OF INTRUSION DATA FOR MALWARE ANALYSIS AND DETECTION IN THE BUILD AND DEPLOYMENT PHASE

    During the COVID-19 pandemic, when most businesses were not equipped for remote work and cloud computing, we saw a significant surge in ransomware attacks. This study aims to utilize machine learning and artificial intelligence to prevent known and unknown malware threats from being exploited by threat actors when developers build and deploy applications to the cloud. This study demonstrated an experimental quantitative research design using Aqua. The experiment's sample is a Docker image. Aqua checked the Docker image for malware, sensitive data, Critical/High vulnerabilities, misconfigurations, and OSS licenses. The data collection approach is experimental. Our analysis of the experiment demonstrated how unapproved images were prevented from running anywhere in our environment based on known vulnerabilities, embedded secrets, OSS licensing, dynamic threat analysis, and secure image configuration. In addition to the experiment, the forensic data collected in the build and deployment phase cover exploitable vulnerabilities, Critical/High vulnerability scores, misconfigurations, sensitive data, and root user (super user) findings. Since Aqua generates a detailed audit record for every event during risk assessment and runtime, we viewed two events on the Audit page for our experiment. One of the events caused an alert due to two failed controls (Vulnerability Score, Super User), and the other was a successful event, meaning that the image is secure to deploy in the production environment. The primary finding of our study is the forensic data associated with the two events on the Audit page in Aqua. In addition, Aqua validated our security controls and runtime policies based on the forensic data from both events on the Audit page. Finally, the study's conclusions can reduce the likelihood that organizations fall victim to ransomware by mitigating and preventing the damage caused by a malware attack.
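
    Aqua's own policy engine and report format are not reproduced here; purely as an illustration of the kind of pre-deployment gate the study describes, the sketch below evaluates a hypothetical scan report against pass/fail controls before an image is allowed to deploy. All field and control names are made up.

```python
# Generic pre-deployment gate over a scan report; the report fields and
# control names are hypothetical, not Aqua's.
scan_report = {
    "image": "registry.example.com/app:1.4.2",
    "critical_or_high_cves": 3,
    "embedded_secrets": 0,
    "runs_as_root": True,
    "disallowed_licenses": [],
}

controls = {
    "critical_or_high_cves": lambda r: r["critical_or_high_cves"] == 0,
    "embedded_secrets":      lambda r: r["embedded_secrets"] == 0,
    "runs_as_root":          lambda r: not r["runs_as_root"],
    "disallowed_licenses":   lambda r: not r["disallowed_licenses"],
}

failed = [name for name, check in controls.items() if not check(scan_report)]
if failed:
    print(f"BLOCK {scan_report['image']}: failed controls {failed}")
else:
    print(f"ALLOW {scan_report['image']}")
```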

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are some steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or certain calculations made, on the VM instance offered by the CS provider. This thesis combines three components (trusted computing, virtualization technology, and cloud computing platforms) to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage, and network resources in order to serve a large number of customers using a multi-tenant multiplexing model to offer on-demand self-service over a broad network. Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of the Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it represents a step towards the creation of a secure and trusted public cloud computing environment.
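
    The actual protocol is specified in the thesis; as a rough, hypothetical sketch of the trusted-launch idea (client nonce, host attestation, verification before VM launch), with every function stubbed rather than taken from any real TPM or OpenStack API:

```python
import hashlib
import os

# Hypothetical, simplified trusted-launch flow; real attestation uses TPM
# quotes and a verification service, not this toy hash binding.

TRUSTED_HOST_MEASUREMENTS = {"known-good-host-measurement"}

def client_request_launch(image_digest: str) -> dict:
    """CS client prepares a launch request with a fresh nonce."""
    return {"nonce": os.urandom(16).hex(), "image_digest": image_digest}

def host_attest(request: dict, host_measurement: str) -> dict:
    """Host binds its platform measurement to the client's nonce (stubbed)."""
    proof = hashlib.sha256((host_measurement + request["nonce"]).encode()).hexdigest()
    return {"measurement": host_measurement, "proof": proof}

def client_verify_and_launch(request: dict, attestation: dict,
                             actual_image_digest: str) -> bool:
    """Allow the launch only if the host is trusted and the image is untampered."""
    expected = hashlib.sha256(
        (attestation["measurement"] + request["nonce"]).encode()).hexdigest()
    host_ok = (attestation["proof"] == expected
               and attestation["measurement"] in TRUSTED_HOST_MEASUREMENTS)
    image_ok = request["image_digest"] == actual_image_digest
    return host_ok and image_ok

req = client_request_launch(image_digest="digest-of-approved-image")
att = host_attest(req, host_measurement="known-good-host-measurement")
print(client_verify_and_launch(req, att, actual_image_digest="digest-of-approved-image"))
```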