
    Social Computing and Cooperation Services for Connected Government and Cross-Boundary Services Delivery

    Connected Government requires different government organizations to connect seamlessly across functions, agencies, and jurisdictions in order to deliver effective and efficient services to citizens and businesses. In the countries of the European Union, this also involves the possibility of delivering cross-border services, an important step toward a truly united Europe. To achieve this goal, European citizens and businesses should be able to interact with different public administrations in different Member States so seamlessly that they perceive them as a single entity. Interoperability, a key factor for Connected Government, is not enough to achieve this result, since it usually does not consider the social dimension of organizations. This dimension is at the basis of co-operability, a form of non-technical interoperability that allows different organizations to function together essentially as a single organization. In this chapter, it is argued that, because social computing services and tools have a unique capacity to couple several technologies and processes with interpersonal styles, awareness, communication tools, and conversational models, their integration within inter-organizational workflows can make those workflows more efficient and effective. It can also support the “learning” process that leads different organizations to achieve co-operability.

    Diagnosis of Errors in Stalled Inter-Organizational Workflow Processes

    Fault-tolerant inter-organizational workflow processes help participant organizations efficiently complete their business activities and operations without extended delays. The stalling of inter-organizational workflow processes is a common hurdle that causes organizations immense losses and operational difficulties. The complexity of software requirements, the incapability of workflow systems to properly handle exceptions, and inadequate process modeling are the leading causes of errors in workflow processes. This dissertation is essentially about diagnosing errors in stalled inter-organizational workflow processes. Its goals and objectives were achieved by designing a fault-tolerant software architecture covering the workflow system components/modules (i.e., workflow process designer, workflow engine, workflow monitoring, workflow administrative panel, service integration, workflow client) relevant to exception handling and troubleshooting. The complexity and improper implementation of software requirements were handled by building a framework of guiding principles and best practices for modeling and designing inter-organizational workflow processes. Theoretical and empirical/experimental research methodologies were used to find the root causes of errors in stalled workflow processes. Error detection and diagnosis are critical steps that can be further used to design a strategy to resolve the stalled processes. Diagnosis of errors in stalled workflow processes was in scope, but the resolution of stalled workflow processes was out of scope for this dissertation. The software architecture facilitated automatic and semi-automatic diagnostics of errors in stalled workflow processes from real-time and historical perspectives.
The empirical/experimental study was justified by creating state-of-the-art inter-organizational workflow (IOWF) processes using an API-based workflow system, a low-code workflow automation platform, a supported high-level programming language, and a storage system. The empirical/experimental measurements and dissertation goals were explained by collecting, analyzing, and interpreting the workflow data. The methodology was evaluated based on its ability to diagnose errors successfully (i.e., identify the root cause) in stalled processes caused by web service failures in the inter-organizational workflow processes. Fourteen datasets were created to analyze, verify, and validate the hypotheses and the software architecture. Among the fourteen datasets, seven covered end-to-end IOWF process scenarios, including IOWF web service consumption, and seven covered IOWF web services alone. The results of the data analysis strongly supported and validated the software architecture and hypotheses. The guiding principles and best practices of workflow process modeling and design point to opportunities for preventing processes from stalling. The outcome of the dissertation, i.e., the diagnosis of errors in stalled inter-organizational processes, can be utilized to resolve these stalled processes.
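The diagnosis step the abstract describes can be illustrated with a minimal sketch. Assuming a hypothetical task record that exposes a status, a last heartbeat timestamp, and an optional error message (the field names, labels, and 30-minute stall threshold are invented for illustration; the dissertation's actual architecture is far richer), a coarse root-cause classifier for a possibly stalled task might look like:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class TaskState:
    name: str
    status: str                  # e.g. "running", "waiting", "failed"
    last_heartbeat: datetime
    last_error: Optional[str] = None

def diagnose(task: TaskState, now: datetime,
             stall_after: timedelta = timedelta(minutes=30)) -> str:
    """Return a coarse root-cause label for a possibly stalled task."""
    if task.status == "failed" and task.last_error and "timeout" in task.last_error.lower():
        return "web-service-timeout"   # the web-service failure class studied above
    if task.status == "failed":
        return "task-failure"
    if now - task.last_heartbeat > stall_after:
        return "stalled-no-heartbeat"  # nominally running, but silent: likely stalled
    return "healthy"
```

A real system would, as the abstract notes, combine such real-time signals with historical process data rather than relying on a single heartbeat check.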

    SIMDAT


    Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers

    For decades, the use of HPC systems was limited to those in the physical sciences who had mastered their domain in conjunction with a deep understanding of HPC architectures and algorithms. During these same decades, consumer computing device advances produced tablets and smartphones that allow millions of children to interactively develop and share code projects across the globe. As the HPC community faces the challenges associated with guiding researchers from disciplines using high-productivity interactive tools to effective use of HPC systems, it seems appropriate to revisit the assumptions surrounding the skills required for access to large computational systems. For over a decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high performance computing by seamlessly integrating familiar high-productivity tools to provide users with an increased number of design turns, rapid prototyping capability, and faster time to insight. In this paper, we discuss the lessons learned while supporting interactive, on-demand high performance computing from the perspectives of the users and of the team supporting the users and the system. Building on these lessons, we present an overview of current needs and the technical solutions we are building to lower the barrier to entry for new users from the humanities, social, and biological sciences.

    Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance Computing (WIHPC) 2018, held in conjunction with ISC High Performance 2018 in Frankfurt, Germany

    Information Security in Business Intelligence based on Cloud: A Survey of Key Issues and the Premises of a Proposal

    More sophisticated inter-organizational interactions have changed the way organizations do business. Advanced forms of collaboration, such as Business Process as a Service (BPaaS), allow different partners to leverage business intelligence within organizations. However, although this presents powerful economic and technical benefits, it also raises pitfalls around data security, especially when interactions are mediated by the cloud. In this article, aspects of the literature related to data risks and accountability are presented. In addition, some open issues are identified from an analysis of the existing methodologies and techniques proposed in the literature. A final point is made by proposing an approach that aims at preventive, detective, and corrective accountability and data risk management, based on usage control policies and model-driven engineering.
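As a toy illustration of the usage-control idea mentioned in the abstract (the partner names, datasets, and purposes below are invented for this sketch, not taken from the article), a preventive, default-deny policy check can be reduced to its essence:

```python
# Hypothetical usage-control rules: which partner may use which dataset, for what purpose.
POLICIES = [
    {"partner": "acme", "dataset": "sales", "purposes": {"analytics"}},
    {"partner": "beta", "dataset": "sales", "purposes": {"analytics", "export"}},
]

def is_permitted(partner: str, dataset: str, purpose: str) -> bool:
    """Grant access only if an explicit rule covers this exact use (default deny)."""
    return any(
        rule["partner"] == partner
        and rule["dataset"] == dataset
        and purpose in rule["purposes"]
        for rule in POLICIES
    )
```

Real usage control goes further than such a static check: it also constrains what happens to the data after access is granted, which is precisely where the detective and corrective accountability mechanisms the article proposes come in.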

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394, "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.
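One concrete face of the Ops-to-Dev feedback loop the seminar targeted is flagging performance anomalies in operational metrics. A minimal sketch (the window size and threshold are arbitrary choices for illustration, not from the report) flags latency samples that exceed a trailing-window z-score threshold:

```python
from statistics import mean, stdev

def anomalies(latencies: list[float], window: int = 5, k: float = 3.0) -> list[int]:
    """Return indices of samples more than k std-devs above the trailing-window mean."""
    flagged = []
    for i in range(window, len(latencies)):
        w = latencies[i - window:i]        # the trailing window, excluding sample i
        mu, sigma = mean(w), stdev(w)
        if sigma > 0 and latencies[i] > mu + k * sigma:
            flagged.append(i)
    return flagged
```

In a performance-aware DevOps pipeline, each flagged index would become a signal back to development, e.g. annotated with the deployment that preceded it.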